#statme

EFECT: A Method to Quantify the Reproducibility of Stochastic Simulations (arXiv.org)

Reproducibility is a fundamental requirement for validating scientific claims in computational research. Stochastic computational models are widely used in fields such as systems biology, financial modeling, and environmental sciences. However, achieving reproducibility in stochastic simulations remains challenging, as each run can produce different outcomes. Existing infrastructure and software tools do not address independent reproduction of simulation results. Without independent reproducibility, results and conclusions lack credibility, as it remains unclear whether observed findings reflect model behavior or are artifacts of stochastic variation or an underpowered study. To bridge this gap, we introduce the Empirical Characteristic Function Equality Convergence Test (EFECT), a data-driven method to quantify the reproducibility of stochastic simulation results. EFECT employs empirical characteristic functions to compare reported results with independently generated ones, quantifying their distributional discrepancy as the EFECT error. Additionally, we establish the EFECT convergence point, a quantitative metric for determining the number of simulation runs required to achieve an EFECT error of a priori significance. EFECT is applicable to all bounded, real-valued outputs, regardless of the model type or simulation method that produced them. We tested EFECT on over 40 use cases to demonstrate its broad applicability and effectiveness. EFECT standardizes stochastic simulation reproducibility, establishing a workflow that guarantees reliable results, supports a wide range of stakeholders, and thereby enhances validation of stochastic simulation studies across a model's lifecycle. To promote standardization, we are developing libSSR, an open-source software library in multiple programming languages, for easy integration of EFECT.
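As a concrete illustration of the ingredient EFECT builds on, the sketch below compares two batches of simulation outputs through their empirical characteristic functions, phi_n(t) = (1/n) sum_j exp(i t x_j). The integrated squared difference used here is only a stand-in discrepancy, not the paper's EFECT error or convergence criterion, and the function names are illustrative.

```python
import numpy as np

def ecf(samples, t):
    """Empirical characteristic function: phi_n(t) = (1/n) * sum_j exp(i*t*x_j)."""
    return np.exp(1j * np.outer(t, samples)).mean(axis=1)

def ecf_discrepancy(x, y, t_grid=np.linspace(-5.0, 5.0, 201)):
    """Integrated squared difference between two samples' ECFs.
    NOTE: an illustrative stand-in, not the paper's EFECT error definition."""
    diff = ecf(x, t_grid) - ecf(y, t_grid)
    dt = t_grid[1] - t_grid[0]
    return float(np.sum(np.abs(diff) ** 2) * dt)

rng = np.random.default_rng(0)
reported = rng.normal(0.0, 1.0, 2000)     # outputs reported by the original study
replication = rng.normal(0.0, 1.0, 2000)  # outputs from an independent re-run
print(ecf_discrepancy(reported, replication))  # near 0 when distributions match
```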
Partial identification via conditional linear programs: estimation and policy learning (arXiv.org)

Many important quantities of interest are only partially identified from observable data: the data can limit them to a set of plausible values, but not uniquely determine them. This paper develops a unified framework for covariate-assisted estimation, inference, and decision making in partial identification problems where the parameter of interest satisfies a series of linear constraints, conditional on covariates. In such settings, bounds on the parameter can be written as expectations of solutions to conditional linear programs that optimize a linear function subject to linear constraints, where both the objective function and the constraints may depend on covariates and need to be estimated from data. Examples include estimands involving the joint distributions of potential outcomes, policy learning with inequality-aware value functions, and instrumental variable settings. We propose two de-biased estimators for bounds defined by conditional linear programs. The first directly solves the conditional linear programs with plug-in estimates and uses output from standard LP solvers to de-bias the plug-in estimate, avoiding the computationally demanding vertex enumeration of all possible solutions required for symbolic bounds. The second uses entropic regularization to create smooth approximations to the conditional linear programs, trading a small amount of approximation error for improved estimation and computational efficiency. We establish conditions for asymptotic normality of both estimators, show that both are robust to first-order errors in estimating the conditional constraints and objectives, and construct Wald-type confidence intervals for the partially identified parameters. These results also extend to policy learning problems where the value of a decision policy is only partially identified. We apply our methods to a study of the effects of Medicaid enrollment.
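The two estimation strategies can be sketched in deliberately simplified form: a plug-in LP solve whose dual variables (reported by standard solvers, here HiGHS via SciPy) would feed a de-biasing correction, and an entropic softmin standing in for the smoothing idea. The paper's actual correction terms are omitted, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Schematic: a bound written as the value of a (conditional) LP
#   min_x  c(w)^T x   s.t.  A_ub x <= b(w),  x >= 0,
# where c(w) and b(w) would be plug-in estimates depending on covariates w.
def plugin_lp_bound(c_hat, A_ub, b_hat):
    res = linprog(c_hat, A_ub=A_ub, b_ub=b_hat,
                  bounds=[(0, None)] * len(c_hat), method="highs")
    # res.ineqlin.marginals holds the dual variables; de-biasing schemes
    # use such solver output to correct first-order estimation error.
    return res.fun, res.ineqlin.marginals

def entropic_softmin(values, eps=0.05):
    """Smooth approximation to min(values): -eps * log(sum(exp(-v/eps))).
    Satisfies min(v) - eps*log(n) <= softmin <= min(v). Illustrates the
    entropic-regularization idea; the paper regularizes the LP itself."""
    v = np.asarray(values, dtype=float)
    m = v.min()  # shift for numerical stability of the log-sum-exp
    return m - eps * np.log(np.sum(np.exp(-(v - m) / eps)))

val, duals = plugin_lp_bound(np.array([1.0, 2.0]),
                             np.array([[1.0, 1.0]]), np.array([1.0]))
print(val, duals, entropic_softmin([0.0, 0.3, 1.0]))
```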
LLM-based Agents for Automated Confounder Discovery and Subgroup Analysis in Causal Inference (arXiv.org)

Estimating individualized treatment effects from observational data presents a persistent challenge due to unmeasured confounding and structural bias. Causal machine learning (causal ML) methods, such as causal trees and doubly robust estimators, provide tools for estimating conditional average treatment effects, but their effectiveness is limited in complex real-world environments by latent confounders and by confounders described only in unstructured formats. Moreover, reliance on domain experts for confounder identification and rule interpretation introduces high annotation costs and scalability concerns. In this work, we propose large language model (LLM)-based agents for automated confounder discovery and subgroup analysis, integrating them into the causal ML pipeline to simulate domain expertise. Our framework systematically performs subgroup identification and confounding-structure discovery by leveraging the reasoning capabilities of LLM-based agents, reducing human dependency while preserving interpretability. Experiments on real-world medical datasets show that our approach enhances the robustness of treatment effect estimation by narrowing confidence intervals and uncovering unrecognized confounding biases. Our findings suggest that LLM-based agents offer a promising path toward scalable, trustworthy, and semantically aware causal inference.
#cslg #csai #csma
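A heavily simplified sketch of the confounder-discovery step described in the abstract above; query_llm is a hypothetical stand-in for whatever model backs the agents, not an API from the paper.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM backend."""
    raise NotImplementedError("plug in your LLM client here")

def discover_confounders(case_notes: str, treatment: str, outcome: str) -> list[str]:
    """Ask an LLM agent to propose candidate confounders from unstructured text."""
    prompt = (
        f"Treatment: {treatment}\nOutcome: {outcome}\n"
        f"Clinical notes: {case_notes}\n"
        "List variables mentioned above that plausibly affect both the "
        "treatment decision and the outcome, one per line."
    )
    return [line.strip() for line in query_llm(prompt).splitlines() if line.strip()]

# The proposed confounders would then enter a downstream causal ML estimator
# (e.g., a doubly robust learner) as additional adjustment covariates.
```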
A test statistic, $h^*$, for outlier analysis (arXiv.org)

Outlier analysis is a critical tool across diverse domains, from clinical decision-making to cybersecurity and talent identification. Traditional statistical outlier detection methods, such as Grubbs' test and Dixon's Q, are predicated on the assumption of normality and often fail to reckon with the meaningfulness of exceptional values in non-normal datasets. In this paper, we introduce the h* statistic, a novel parametric, frequentist approach for evaluating global outliers without the normality assumption. Unlike conventional techniques that primarily remove outliers to preserve statistical 'integrity,' h* assesses distinctive values as phenomena worthy of investigation, quantifying a data point's extremity relative to its group as a measure of statistical significance, analogous to the role of Student's t in comparing means. We detail the mathematical formulation of h*, with tabulated confidence intervals at standard significance levels and extensions to Bayesian inference and paired analysis. The capacity of h* to discern between stable extraordinary deviations and values that merely appear extreme under conventional criteria is demonstrated using empirical data from a mood intervention study. A generalisation of h* is then proposed, with individual weights assigned to differences for nuanced contextual description, and a variable sensitivity exponent for objective inference optimisation and subjective inference specification. The physical significance of an h*-recognised outlier is linked to the signature of unique occurrences. Our findings suggest that h* offers a robust alternative for outlier evaluation, enriching the analytical repertoire for researchers and practitioners by foregrounding the interpretative value of outliers in complex, real-world datasets. This paper is also a statement against the dominance of normality, in celebration of the luminary and the lunatic alike.
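The abstract does not give the closed form of h*, so no attempt is made to reproduce it here; for context, this is the classical Grubbs test that h* is positioned against, with its standard normality-based critical value.

```python
import numpy as np
from scipy import stats

def grubbs_statistic(x):
    """Classical (normality-based) Grubbs statistic: G = max|x_i - mean| / sd."""
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def grubbs_critical(n, alpha=0.05):
    """Two-sided Grubbs critical value derived from the Student-t distribution."""
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

x = [9.8, 10.1, 10.0, 9.9, 10.2, 14.7]
print(grubbs_statistic(x) > grubbs_critical(len(x)))  # True: 14.7 is flagged
```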
How Can I Publish My LLM Benchmark Without Giving the True Answers Away? (arXiv.org)

Publishing a large language model (LLM) benchmark on the Internet risks contaminating future LLMs: the benchmark may be unintentionally (or intentionally) used to train or select a model. A common mitigation is to keep the benchmark private and let participants submit their models or predictions to the organizers. However, this strategy requires trust in a single organization and still permits test-set overfitting through repeated queries. To overcome these issues, we propose a way to publish benchmarks without completely disclosing the ground-truth answers to the questions, while still maintaining the ability to openly evaluate LLMs. Our main idea is to inject randomness into the answers by preparing several logically correct answers and including only one of them as the solution in the benchmark. This reduces the best possible accuracy, i.e., the Bayes accuracy, of the benchmark. Not only does this keep the ground truth from being fully disclosed, it also offers a test for detecting data contamination: in principle, even fully capable models should not surpass the Bayes accuracy, so a model that does provides a strong signal of data contamination. We present experimental evidence that our method can detect data contamination accurately across a wide range of benchmarks, models, and training methodologies.
#cslg #csai #cscl
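The ceiling logic from the abstract above can be made concrete in a short sketch, assuming (as described) that each question's recorded answer is drawn uniformly from its k logically correct alternatives, so even a perfect model matches it with probability 1/k.

```python
def bayes_accuracy_ceiling(answers_per_question):
    """If question i has k_i logically correct answers and the benchmark
    records one of them uniformly at random, a perfect model matches the
    recorded answer with probability 1/k_i; the ceiling is the average."""
    return sum(1.0 / k for k in answers_per_question) / len(answers_per_question)

ks = [1, 1, 2, 2, 4]                  # interchangeable correct answers per question
ceiling = bayes_accuracy_ceiling(ks)  # (1 + 1 + 0.5 + 0.5 + 0.25) / 5 = 0.65
observed = 0.80                       # a model scoring above the ceiling
if observed > ceiling:
    print("accuracy above the Bayes ceiling: strong signal of contamination")
```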
Density Prediction of Income Distribution Based on Mixed Frequency Data (arXiv.org)

Modeling large dependent datasets is a crucial research area in modern time series analysis. One effective approach to handling such datasets is to transform the observations into density functions and apply statistical methods for further analysis. Income distribution forecasting, a common application scenario, benefits from predicting density functions because doing so accounts for the uncertainty around point estimates, leading to better-informed policy formulation. However, predictive modeling becomes challenging when dealing with mixed-frequency data. To address this challenge, this paper introduces a mixed data sampling regression model for probability density functions (PDF-MIDAS). To mitigate variance inflation caused by high-frequency predictor variables, we use exponential Almon polynomials with few parameters to regularize the coefficient structure. Additionally, we propose an iterative estimation method based on quadratic programming and the BFGS algorithm. Simulation analyses demonstrate that as the sample size for estimating density functions and the observation length increase, the estimator approaches the true value. Real-data analysis reveals that, compared to single-sequence prediction models, PDF-MIDAS with high-frequency exogenous variables offers a wider range of application scenarios with superior fitting and prediction performance.
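For readers unfamiliar with the weighting scheme, the two-parameter exponential Almon polynomial is the standard MIDAS device for compressing many high-frequency lags into few parameters; the paper's exact parameterization may differ, so treat this as the textbook form.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Two-parameter exponential Almon lag weights, the common MIDAS form:
    w_j proportional to exp(theta1*j + theta2*j^2), normalized to sum to 1."""
    j = np.arange(1, n_lags + 1)
    raw = np.exp(theta1 * j + theta2 * j**2)
    return raw / raw.sum()

# Aggregate 12 high-frequency (e.g., monthly) observations into one regressor
# for a low-frequency (e.g., annual) regression.
w = exp_almon_weights(0.05, -0.01, 12)
x_hf = np.array([1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.1, 0.95, 1.05, 1.0, 0.9])
print(w.round(3), float(w @ x_hf))
```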
Transforming Sensitive Documents into Quantitative Data: An AI-Based Preprocessing Toolchain for Structured and Privacy-Conscious Analysis (arXiv.org)

Unstructured text from legal, medical, and administrative sources offers a rich but underutilized resource for research in public health and the social sciences. However, large-scale analysis is hampered by two key challenges: the presence of sensitive, personally identifiable information, and significant heterogeneity in structure and language. We present a modular toolchain that prepares such text data for embedding-based analysis, relying entirely on open-weight models that run on local hardware, requiring only a workstation-level GPU, and thereby supporting privacy-sensitive research. The toolchain employs large language model (LLM) prompting to standardize, summarize, and, when needed, translate texts to English for greater comparability. Anonymization is achieved via LLM-based redaction, supplemented with named entity recognition and rule-based methods to minimize the risk of disclosure. We demonstrate the toolchain on a corpus of 10,842 Swedish court decisions under the Care of Abusers Act (LVM), comprising over 56,000 pages. Each document is processed into an anonymized, standardized summary and transformed into a document-level embedding. Validation, including manual review, automated scanning, and predictive evaluation, shows that the toolchain effectively removes identifying information while retaining semantic content. As an illustrative application, we train a predictive model using embedding vectors derived from a small set of manually labeled summaries, demonstrating the toolchain's capacity for semi-automated content analysis at scale. By enabling structured, privacy-conscious analysis of sensitive documents, our toolchain opens new possibilities for large-scale research in domains where textual data was previously inaccessible due to privacy and heterogeneity constraints.
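The stages of the toolchain can be summarized in a schematic skeleton; llm and embed below are hypothetical stand-ins for a local open-weight model and an embedding model, not the authors' interfaces.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a local open-weight LLM."""
    raise NotImplementedError("plug in a local LLM client")

def embed(text: str) -> list[float]:
    """Hypothetical stand-in for a local embedding model."""
    raise NotImplementedError("plug in a local embedding model")

def process_document(raw_text: str) -> list[float]:
    # 1. Standardize and summarize (translating to English when needed).
    summary = llm("Summarize the following document in English, "
                  "preserving key facts:\n" + raw_text)
    # 2. Anonymize via LLM-based redaction; a real pipeline would supplement
    #    this with named entity recognition and rule-based checks.
    redacted = llm("Rewrite, replacing all personal identifiers "
                   "with neutral placeholders:\n" + summary)
    # 3. Produce a document-level embedding for quantitative analysis.
    return embed(redacted)
```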