Recent Submissions

Publication
Topics in Causal Inference: Heterogeneous Treatment Effects and Sensitivity Analysis (2024-09), Hu, Rui

Understanding causality from observational data is often of interest in scientific research. We propose methods for identifying and estimating causal parameters related to treatment effect heterogeneity and sensitivity analysis.

In the first project, we study two causal parameters defined as functions of the conditional average treatment effect (CATE): the average squared CATE and the variance of the CATE. These parameters measure the overall extent of the treatment effect and of treatment effect heterogeneity across the population, respectively. We develop efficient estimators for these parameters that are guaranteed to lie in the parameter space and allow for valid inference under both the null and alternative hypotheses. We demonstrate the practical performance of our methodology through numerical studies and a real data application.

In the second and third projects, motivated by assessing evidence of causal effects of physical activity on mortality in the presence of potential unobserved confounding or confounder misclassification, we study sensitivity analyses to unobserved confounding and confounder misclassification with time-to-event outcomes. In the second project, we compare two types of causal sensitivity analysis using data from the NIH-AARP Study: sensitivity to confounder misclassification and sensitivity to unobserved confounding. We find that the effect of physical activity on respiratory disease mortality is not explained away by a moderate amount of unobserved confounding or confounder misclassification. The effect of physical activity on cancer mortality is explained away by a small amount of unobserved confounding, but not by confounder misclassification. We hypothesize that the robustness to confounder misclassification could be due to assumptions of the measurement error model. These results indicate that existing sensitivity analysis tools require strong assumptions on the data-generating process, making them too restrictive to capture confounding structures in real-world scenarios. To tackle this limitation, in the third project we extend a nonparametric sensitivity analysis framework (Chernozhukov et al., 2022) to time-to-event data. We propose efficient estimators that bound the causal effect as a function of the degree of unobserved confounding, and we demonstrate the performance of the proposed methods in numerical studies. We illustrate the proposed sensitivity analysis with data from the same prospective cohort study as in the second project.
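
For orientation, the two parameters have a standard potential-outcomes formulation (our notation, not necessarily the thesis's):

\[
  \tau(x) = \mathbb{E}\bigl[\,Y(1) - Y(0) \mid X = x\,\bigr], \qquad
  \theta_1 = \mathbb{E}\bigl[\tau(X)^2\bigr], \qquad
  \theta_2 = \operatorname{Var}\bigl(\tau(X)\bigr) = \mathbb{E}\bigl[\tau(X)^2\bigr] - \bigl(\mathbb{E}[\tau(X)]\bigr)^2.
\]

In particular, $\theta_1 = 0$ exactly when the CATE vanishes almost everywhere and $\theta_2 = 0$ exactly when the effect is homogeneous, which is why valid inference at and near these null boundaries is the delicate part.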

Publication
Nonparametric Inference using Shape Constraints and Bias Correction (2024-09), Wu, Yujian

In the first chapter, we study the problem of nonparametric inference for a hazard ratio function under a monotonicity constraint. The ratio of the hazard functions of two populations, or of two strata of a single population, plays an important role in time-to-event analysis. Cox regression is commonly used to estimate the hazard ratio under the assumption that it is constant in time, known as the proportional hazards assumption. However, this assumption is often violated in practice, and when it is violated, the parameter estimated by Cox regression is difficult to interpret. The hazard ratio can be estimated nonparametrically using smoothing, but smoothing-based estimators are sensitive to the choice of tuning parameters, and it is often difficult to perform valid inference with such estimators. In some cases, the hazard ratio function is known to be monotone. We demonstrate that monotonicity of the hazard ratio function defines an invariant stochastic order, and we study the properties of this order. We then introduce an estimator of the hazard ratio function under a monotonicity constraint, show that it converges in distribution to a mean-zero limit, and use this result to construct asymptotically valid confidence intervals. Finally, we conduct numerical studies to assess the finite-sample behavior of our estimator, and we use our methods to estimate the hazard ratio of progression-free survival in pulmonary adenocarcinoma patients treated with gefitinib or carboplatin-paclitaxel.

In the second chapter, we explore a novel nonparametric inference approach for a debiased kernel density estimator. Kernel density estimation is one of the most popular nonparametric methods for estimating probability density functions, but kernel density estimators are well known to be biased. The robust bias correction approach proposed by Calonico et al. (2018) can effectively reduce this bias, leading to substantial improvements in confidence interval coverage; however, bias correction can produce negative density estimates. We propose bias correction and inference for kernel density estimators on the log-density scale, which ensures positive density estimates wherever the original kernel density estimator is positive. We demonstrate that our estimator is within $o_P(n^{-1})$ of the bias-corrected estimator of Calonico et al. (2018), and that the t-statistic constructed from the log-transformed estimator exhibits higher coverage accuracy than the t-statistic for the bias-corrected estimator. We then use an Edgeworth expansion of our estimator to demonstrate that the proposed approach yields the same rate of coverage error as that of Calonico et al. (2018). Numerical studies illustrate the practical performance of our methods compared to ordinary and bias-corrected kernel density estimators.

In the third chapter, we consider improving monotonicity-constrained nonparametric inference with debiased kernel smoothing. Monotonicity plays an important role when dealing with survival data or regression relationships, and it is desirable to have an estimator that is both monotone and smooth. However, monotonicity-constrained estimators can suffer from issues such as significant boundary bias, slower convergence rates, and lack of smoothness, and simply combining a monotone estimator with kernel smoothing can exacerbate these problems, leading to increased bias, loss of smoothness, and loss of monotonicity. Our new method projects a debiased local linear regression estimator onto a monotonicity-constrained spline smoother. The resulting estimator adheres to the shape constraint, is smooth, is uniformly consistent, has reduced bias, and maintains a satisfactory rate of convergence. In a numerical study, we use the bootstrap to demonstrate the superior performance of our estimator compared to the local linear estimator.
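
To make the second chapter's log-scale idea concrete, here is a minimal numpy sketch of a Gaussian kernel density estimator with a simple plug-in bias correction applied on the log scale, so the corrected estimate stays positive wherever the raw estimate is. This illustrates the general idea only; it is not the estimator or the robust bias correction studied in the thesis, and the bandwidths are arbitrary.

```python
import numpy as np

def kde_gaussian(x, data, h):
    """Gaussian KDE: f_hat(x) = (1/(n h)) * sum_i phi((x - X_i)/h)."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def kde_second_deriv(x, data, b):
    """Estimate f''(x) via the second derivative of the Gaussian kernel."""
    u = (x[:, None] - data[None, :]) / b
    phi = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return ((u**2 - 1) * phi).sum(axis=1) / (len(data) * b**3)

def debiased_kde_log_scale(x, data, h, b):
    # Leading KDE bias is (h^2 / 2) * f''(x) for a second-order kernel.
    f_hat = kde_gaussian(x, data, h)
    bias = 0.5 * h**2 * kde_second_deriv(x, data, b)  # pilot bandwidth b
    # The additive correction f_hat - bias can go negative; correcting on
    # the log scale, f_hat * exp(-bias / f_hat), agrees to first order but
    # stays positive wherever f_hat > 0.
    return f_hat * np.exp(-bias / np.maximum(f_hat, 1e-300))

rng = np.random.default_rng(0)
data = rng.normal(size=500)
grid = np.linspace(-3, 3, 61)
print(debiased_kde_log_scale(grid, data, h=0.4, b=0.6)[:5])
```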

Publication
Matrix Factorizations and Khovanov Homology (2024-09), Wang, Arthur

In this thesis we develop a geometric interpretation of Rasmussen's spectral sequences using a construction for Khovanov-Rozansky link homology developed by Oblomkov and Rozansky. In the special case of Khovanov homology, we provide a proof for the geometric construction of Rasmussen's differentials by examining the relationship between matrix factorizations and Soergel bimodules. Finally, we leverage the techniques developed to provide an alternative method for computing the Khovanov homology of knots and links.

Publication
Machine Learning for Chaotic Dynamical Systems (2024-09), Kennedy, Connor

This dissertation concerns the use of machine learning to study dynamical systems, particularly chaotic dynamical systems. Chapter 1 provides a brief introduction to the fields of chaotic dynamical systems and machine learning, as well as a short overview of chapters 2-4.

In chapter 2, we study the use of machine learning methods to forecast the spread of COVID-19. We consider the Susceptible-Infected-Confirmed-Recovered-Deceased (SICRD) compartmental model, with the goal of estimating the unknown infected compartment I and several unknown parameters. We apply a variation of a "Physics-Informed Neural Network" (PINN), which uses knowledge of the system to aid learning. First, we ensure estimation is possible by verifying the model's identifiability. Then, we propose a wavelet transform to process data for network training. Finally, our central result is a novel modification of the PINN's loss function that reduces the number of simultaneously considered unknowns. We find that our modified network is capable of stable, efficient, and accurate estimation, while the unmodified network consistently yields incorrect values. The modified network is also efficient enough to be applied to a model with time-varying parameters. We present an application of our model results in ranking states by estimated relative testing efficiency. Our findings suggest the effectiveness of our modified PINN, especially in this case of multiple unknown variables.

In chapter 3, we introduce the Discrete-Temporal Sobolev Network (DTSN), a neural network loss function that assists dynamical system forecasting by minimizing variational differences between the network output and the training data via a temporal Sobolev norm. This approach is entirely data-driven, architecture-agnostic, and does not require derivative information from the estimated system. The DTSN is particularly well suited to chaotic dynamical systems, as it minimizes noise in the network output, which is crucial for such sensitive systems. For our test cases we consider discrete approximations of the Lorenz-63 system and the Chua circuit, and for the network architectures we use the Long Short-Term Memory (LSTM) network and the Transformer. The performance of the DTSN is compared with the standard MSE loss for both architectures, as well as with the Physics-Informed Neural Network (PINN) loss for the LSTM. The DTSN loss substantially improves accuracy for both architectures, while requiring less information than the PINN and without noticeably increasing computational time, demonstrating its potential to improve neural network forecasting of dynamical systems.
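
A minimal PyTorch sketch of a loss in this spirit: penalize not only the pointwise error but also the error in temporal finite differences of the trajectory. The weighting and difference order here are illustrative guesses, not the thesis's specification.

```python
import torch

def discrete_sobolev_loss(pred, target, order=1, lam=1.0):
    """MSE on trajectories plus MSE on temporal finite differences.

    pred, target: tensors of shape (batch, time, state_dim).
    order: highest finite-difference order to penalize.
    lam: weight on the difference terms (illustrative choice).
    """
    loss = torch.mean((pred - target) ** 2)
    dp, dt = pred, target
    for _ in range(order):
        dp = dp[:, 1:, :] - dp[:, :-1, :]   # forward difference in time
        dt = dt[:, 1:, :] - dt[:, :-1, :]
        loss = loss + lam * torch.mean((dp - dt) ** 2)
    return loss

# Penalizing differences discourages high-frequency noise in the output
# that a plain MSE loss barely sees.
pred = torch.randn(8, 100, 3, requires_grad=True)
target = torch.randn(8, 100, 3)
discrete_sobolev_loss(pred, target, order=2).backward()
```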

In chapter 4, we present a new method of performing extended dynamic mode decomposition (EDMD) for systems that admit a symbolic representation. EDMD generates an estimate $K_m$ of the Koopman operator $K$ of a system by defining a dictionary of observables on the space and estimating the action of $K$ projected onto the span of this dictionary. One of the most important questions in EDMD is how the dictionary should be chosen. We consider a class of chaotic dynamical systems with a known or estimable generating partition. For these systems we construct an effective dictionary from indicator functions of the "cylinder sets", which play a central role in defining the "symbolic system" associated with the generating partition. We prove strong operator topology convergence both for the projection onto the span of our dictionary and for $K_m$, and we prove practical finite-step estimation bounds for the projection and for $K_m$ as well. Finally, we demonstrate numerical applications of the algorithm to two example systems, the dyadic map and the logistic map.

Finally, chapter 5 briefly recaps the results of chapters 2-4 and discusses directions for potential future research.
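
As a rough illustration of chapter 4's EDMD with an indicator dictionary (our sketch, not the thesis's construction or its convergence analysis): with one-hot indicator features of a partition, the least-squares Koopman estimate reduces to an empirical transition matrix between partition elements. For the dyadic map, the uniform dyadic bins below are exactly the length-m cylinder sets of its generating partition {[0,1/2), [1/2,1)}; the logistic map is used here only because it is numerically friendlier, and its true cylinder sets are not uniform bins.

```python
import numpy as np

def edmd_indicator(traj, m):
    """EDMD with a dictionary of indicators of 2**m uniform bins on [0, 1).

    With one-hot features Psi, the least-squares solution
    K = argmin ||Psi(X) K - Psi(Y)||_F reduces to the row-normalized
    matrix of empirical transition counts between bins.
    """
    D = 2**m
    x, y = traj[:-1], traj[1:]
    ix = np.minimum((x * D).astype(int), D - 1)   # bin index of x_t
    iy = np.minimum((y * D).astype(int), D - 1)   # bin index of x_{t+1}
    counts = np.zeros((D, D))
    np.add.at(counts, (ix, iy), 1.0)
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Logistic map trajectory; for the dyadic map these uniform dyadic bins
# coincide with the length-m cylinder sets of the generating partition.
rng = np.random.default_rng(1)
x = np.empty(200_000); x[0] = rng.uniform()
for t in range(len(x) - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
K_m = edmd_indicator(x, m=3)
print(K_m.round(2))
```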

Publication
Random Averaging Operators for Periodic Quadratic Derivative Wave Equations (2024-09), Katsaros, Dean

This thesis studies semilinear wave equations with quadratic derivative nonlinearity $|\nabla u|^2$ (qDNLW) from the probabilistic perspective. We first adapt a method of Bjoern Bringmann [Bri21] to the $d = 2$ setting; this method goes beyond the linear-nonlinear decomposition due to Bourgain ([Bou94], [Bou96]) and is contained in Chapter III. We then improve local-in-time well-posedness results in the probabilistic setting in spatial dimensions 2 and 3 by constructing the Random Averaging Operators for (qDNLW). Local well-posedness is proven for data in the spaces $H^{3/2+}(\mathbb{T}^3)$ and $H^{11/8+}(\mathbb{T}^2)$. The space $H^{11/8+}(\mathbb{T}^2)$ is supercritical with respect to the deterministic scaling. Both results improve on the best known probabilistic and deterministic results, although these thresholds lie $1+$ and $3/8+$ above the respective probabilistic scalings for (qDNLW). The argument is constructive, in that the solution is shown to have an explicit expression as the linear combination of a Gaussian sum with adapted random matrix coefficients and a smooth remainder term. This is contained in Chapter IV.

Publication
The Lefschetz Standard Conjectures for Varieties of Generalized Kummer Deformation Type (2024-09), Foster, Josiah

For a projective $2n$-dimensional irreducible holomorphic symplectic manifold $Y$ of generalized Kummer deformation type and $j$ the smallest prime number dividing $n+1$, we prove the Lefschetz standard conjectures in degrees $<2(n+1)(j-1)/j$. We show that the restriction homomorphism from the cohomology of a projective deformation of a moduli space of Gieseker-stable sheaves on an Abelian surface to the cohomology of $Y$ is surjective in these degrees. An immediate corollary is that the Lefschetz standard conjectures hold for $Y$ when $n+1$ is prime. The proofs rely on Markman's description of the monodromy of generalized Kummer varieties and construction of a universal family of moduli spaces of sheaves, Verbitsky's theory of hyperholomorphic sheaves, and the cohomological decomposition theorem.

Publication
Tropicalizing the Graph Profiles of Some Collections of Trees (2024-09), Dascălu, Maria

Many important problems in extremal graph theory can be stated as certifying polynomial inequalities in graph homomorphism numbers, and many in particular ask to certify pure binomial inequalities. For a fixed collection of graphs U, the tropicalization of the graph profile of U essentially records all valid pure binomial inequalities involving graph homomorphism numbers for graphs in U. Building upon ideas and techniques described by Blekherman and Raymond in 2022, we compute the tropicalization of the graph profile of $K_1$ and $S_{2,1^k}$-trees for $0 \leq k \leq m-1$, that is, stars with $k+1$ branches, one of which is subdivided; we call these almost-stars. This allows pure binomial inequalities in homomorphism numbers (or densities) of these graphs to be verified through a linear program with $m+1$ variables and $m+5$ constraints. We give a conjecture for the f-vector of this tropicalization. We also present a conjecture for the tropicalization of the graph profile of $K_1$, stars, and almost-stars.

Publication
Two Types of Methods for at Risk Populations (2024-09), Alvandi, Amirhossein

Multiple Systems Estimation (MSE) methods encompass a range of models designed to estimate the size of a closed population using several partial lists. These approaches seek to estimate the probability of individuals appearing on various combinations of lists. We present and explore the most commonly used models, assessing their advantages and limitations in various simulated scenarios, and we identify practical conditions under which all approaches perform poorly. Ultimately, we apply these models to approximate the total fatalities in the Kosovo conflict of 1999.

Respondent-Driven Sampling (RDS) is a widely used method for recruiting samples from hidden or hard-to-reach populations through social connections among members. This chapter enhances traditional RDS, which captures connections via coupon-based recruitment and thus yields a set of tree-structured networks, by introducing a novel augmentation involving the distribution of tokens. This augmentation allows for the exploration of otherwise missed cross-ties, enhancing our understanding of the clusters within these populations. First, we adapt a variant of the logistic regression model of Ward et al. (2009), using an Expectation-Maximization (EM) algorithm to handle positive-unlabeled data in which observed ties are labeled while non-observed ties and non-ties remain unlabeled. Second, we employ a conditional Exponential Random Graph Model (ERGM) for partially observed networks (Handcock and Gile, 2010) that approximates the maximum likelihood function by exploring the set of possible networks consistent with the observed ties. For both methods, we focus on modeling token ties by estimating the overall density of the subgraph of sampled nodes and treating it as observed data, which enables likelihood-based inference without including the sampling design parameter. Additionally, we avoid the need to model the complex missingness structure of RDS by conditioning on the RDS ties. The performance of our estimators of the subgraph of sampled nodes is assessed under various simulation settings.
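
For orientation, the simplest instance of multiple systems estimation is the classical two-list (Lincoln-Petersen) estimator; the models compared in the thesis generalize this idea to several lists with possible dependence. In standard notation (ours, not necessarily the thesis's), with two lists of sizes $n_1$ and $n_2$ from a closed population of unknown size $N$, and $m$ individuals appearing on both lists, independence of the lists gives $\mathbb{E}[m]/n_2 \approx n_1/N$ and hence

\[
  \widehat{N} = \frac{n_1 n_2}{m}.
\]

With $k$ lists, log-linear models for the $2^k - 1$ observable capture patterns extrapolate to the one unobservable cell, the individuals appearing on no list.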

Publication
Stable Dimensionality Reduction for Bound-Support Matrix Data (2024-09), Albert, Gitanjali

We consider the problem of developing interpretable and computationally efficient matrix decomposition methods for matrices whose entries have bounded support. Such matrices arise in large-scale DNA methylation studies, where the data are bounded by the unit interval. We present a family of decomposition strategies for (0,1)-bounded data based on the Doubly Non-Central Beta (DNCB) distribution. Our three factorization approaches are based on the CP and Tucker decompositions. Using an augment-and-marginalize approach, we derive computationally efficient sampling algorithms to solve for the latent factors. We evaluate the performance of our methods using the criteria of predictability, computability, and stability. Empirical results show that our two methods based on the DNCB distribution perform as well as or better than the state-of-the-art in terms of held-out prediction and computational complexity, and perform significantly better in terms of stability to changes in hyperparameters. Inspired by advances in DNA sequencing technology, we also develop a method based on the Conditional DNCB distribution, which allows for the incorporation of additional data collected by modern sequencing methods. This model yields similar guarantees on stability and tractability, while its density exhibits interesting properties for measuring predictive capability. The improved stability of our models based on the Tucker decomposition yields higher confidence in the results in applications where the constituent factors are used to generate and test scientific hypotheses, such as DNA methylation analysis of cancer samples.
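
A minimal illustration of the augmentation that makes the DNCB tractable (a standard representation of the distribution, sketched in our notation; the thesis's samplers target full factorization models, not a single draw): a DNCB variate is a Beta variate whose shape parameters are augmented by latent Poisson counts, and conditioned on those counts everything is conjugate, which is what augment-and-marginalize schemes exploit.

```python
import numpy as np

def sample_dncb(alpha, beta, lam1, lam2, size, rng):
    """Draw from the Doubly Non-Central Beta via Poisson augmentation:
    J ~ Pois(lam1/2), K ~ Pois(lam2/2), X | J, K ~ Beta(alpha+J, beta+K).
    """
    j = rng.poisson(lam1 / 2, size)
    k = rng.poisson(lam2 / 2, size)
    return rng.beta(alpha + j, beta + k)

rng = np.random.default_rng(4)
draws = sample_dncb(1.0, 1.0, 3.0, 5.0, size=10_000, rng=rng)
# Support stays inside the unit interval, matching (0,1)-bounded data.
print(draws.mean(), (0 < draws).all() and (draws < 1).all())
```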

Publication
On the Minimal Model of the Resolution of Symplectic Cyclic Quotients (2024-09), Hart, Sean

To any action of a finite group $G$ on a closed symplectic $4$-manifold $(M, \omega)$, one can associate a symplectic resolution $\pi : (\widetilde{M}, \widetilde{\omega}) \to M/G$. When $b_+^G(M) \geq 2$, equivariant Seiberg-Witten-Taubes theory implies the existence of an invariant collection $\mathcal{K}$ of pseudoholomorphic curves representing $c_1(K_\omega)$ and containing (almost) all the fixed points, and when $(M, \omega)$ is minimal with $c_1(K_\omega)^2 = 0$ the possibilities for $\mathcal{K}$ and its symmetries are constrained. In this work, we explore the nature of $(\widetilde{M}, \widetilde{\omega})$ and its minimal model when $(M, \omega)$ has symplectic Kodaira dimension $\kappa^s(M, \omega) = 1$ and $G = \mathbb{Z}/p$ for $p$ prime, by tracing the evolution of such a collection $\mathcal{K}$ through the quotient $M \to M/G$ and resolution $\widetilde{M} \to M/G$, and finally to the minimal model. We apply this to address a conjecture of Chen, showing for $G = \mathbb{Z}/p$ that if $\kappa^s(M, \omega) = 1$ then $\kappa^s(\widetilde{M}, \widetilde{\omega}) \in \{0, 1\}$.

Publication
Sequential Experiment Design via Investing (2024-05), Cook, Thomas John Jeneralczuk

Online FDR methods have recently been developed to address the need for procedures that maintain FDR control over a sequence of tests when the test statistics are not all known at one time. State-of-the-art online FDR control methods, known as "$\alpha$-investing" methods, do not address testing when the cost of data is not negligible. We propose a novel $\alpha$-investing method for a setting that takes into account the cost of data sample collection, the sample size choice, and prior beliefs about the probability of rejection. Our specific contributions are a theoretical analysis of the long-term asymptotic behavior of $\alpha$-wealth in an $\alpha$-investing procedure; a generalized $\alpha$-investing procedure for sequential testing that simultaneously optimizes sample size and $\alpha$-level using game-theoretic principles; and a non-myopic $\alpha$-investing procedure that maximizes the expected reward over a finite horizon of tests. Empirical results show that a cost-aware ERO decision rule correctly rejects more false null hypotheses than other methods for a fixed sample size of $n=1$. On real data sets from biological experiments, empirical results show that cost-aware ERO balances the allocation of samples to an individual test against the allocation of samples across multiple tests.

A recent perspective on sequential testing, named "testing by betting", poses the process as a repeated betting game between the investigator and nature. The investigator's wealth process across the repeated games can be used to provide continuous (time-uniform) control of the false positive rate, termed \emph{safe, anytime-valid inference}. We draw a fundamental connection between concepts in mathematical finance and sequential testing by treating the test wealth process as an asset, and we build on this notion to construct derivative contracts on the wealth process, in particular options contracts. These assets and options allow the investigator to hedge against the risk of ruin while maintaining anytime-valid error guarantees, providing the first forward-looking contracts that provably protect against ruin. Empirical results demonstrate that these derivative contracts can eliminate the risk of ruin without significant impact on the test's power.

Modern A/B testing platforms offer tools to continuously monitor data and adaptively update the treatment assignment policy. When the purpose of the A/B test is statistical inference for the Average Treatment Effect (ATE), an investigator would like to design an adaptive experiment that minimizes uncertainty in the ATE estimate while maintaining time-uniform error control to accommodate data-dependent stopping times. We provide a central limit theorem, under weaker assumptions than the previous literature, for a semiparametric efficient Adaptive Augmented Inverse-Probability Weighted estimator, enabling its use in more general settings, and we derive both asymptotic and nonasymptotic confidence sequences that are considerably tighter than previous methods while maintaining time-uniform error control.
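
A toy sketch of the testing-by-betting idea (ours, for intuition; the thesis's derivative contracts are built on top of such a wealth process). Under the null, the wealth is a nonnegative martingale, so by Ville's inequality the probability it ever reaches $1/\alpha$ is at most $\alpha$; rejecting the first time it does gives an anytime-valid test.

```python
import numpy as np

def bet_against_fair_coin(xs, lam=0.2, alpha=0.05):
    """Test H0: P(x = 1) = 1/2 by betting a fixed fraction each round.

    Under H0, W_t is a nonnegative martingale, so
    P(sup_t W_t >= 1/alpha) <= alpha (Ville's inequality): rejecting
    when W_t >= 1/alpha is valid at any data-dependent stopping time.
    """
    wealth = 1.0
    for t, x in enumerate(xs, start=1):
        # Payoff 1 + lam*(2x - 1) has mean 1 under H0; |lam| < 1 avoids
        # losing everything in a single round.
        wealth *= 1.0 + lam * (2 * x - 1)
        if wealth >= 1.0 / alpha:
            return t, wealth  # reject H0 at time t
    return None, wealth       # never rejected

rng = np.random.default_rng(2)
print(bet_against_fair_coin(rng.binomial(1, 0.7, size=1000)))  # biased coin
print(bet_against_fair_coin(rng.binomial(1, 0.5, size=1000)))  # fair coin
```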

Publication
A Category of Sample Spaces (1971), Collins, Walter Robert

Publication
Degenerating variations of mixed Hodge structures (1996), Gaze, Eric C.

Pure Hodge structures degenerating along a normal-crossings divisor determine variations of mixed Hodge structures on the divisor, whose properties as described by the Orbit theorems are well known. We are interested in extending these results to more general variations of mixed Hodge structures.

Publication
Properties of Singular Schubert Varieties (2013-09), Koonz, Jennifer

This thesis deals with the study of Schubert varieties, which are subsets of flag varieties indexed by elements of Weyl groups. We start by defining Lascoux elements in the Hecke algebra and showing that they coincide with the Kazhdan-Lusztig basis elements in certain cases. We then construct a resolution $(Z_w, \pi)$ of the Schubert variety $X_w$ for which $R\pi_*(\mathbb{C}[\ell(w)])$ is a sheaf on $X_w$ whose expression in the Hecke algebra is closely related to the Lascoux element. We also define two new polynomials which coincide with the intersection cohomology Poincaré polynomial in certain cases. In the final chapter, we discuss some interesting combinatorial results concerning Bell and Catalan numbers which arose throughout the course of this work.

Publication
Twisted Weyl Group Multiple Dirichlet Series Over the Rational Function Field (2013-09), Friedlander, Holley Ann

Let K be a global field. For each prime p of K, the p-part of a multiple Dirichlet series defined over K is a generating function in several variables for the p-power coefficients. Let $\Phi$ be an irreducible, reduced root system, and let $n$ be an integer greater than 1. Fix a prime power $q \in \mathbb{Z}$ congruent to 1 modulo $2n$, and let $\mathbb{F}_q(T)$ be the field of rational functions in $T$ over the finite field $\mathbb{F}_q$ of order $q$. In this thesis, we examine the relationship between Weyl group multiple Dirichlet series over $K = \mathbb{F}_q(T)$ and their p-parts, which we define using the Chinta-Gunnells method [10]. Our main result shows that Weyl group multiple Dirichlet series of type $\Phi$ over $\mathbb{F}_q(T)$ may be written as the finite sum of their p-parts (after a certain variable change), with "multiplicities" that are character sums. This result gives an analogy between twisted Weyl group multiple Dirichlet series over the rational function field and characters of representations of semi-simple complex Lie algebras associated to $\Phi$. Because the p-parts and the global series are closely related, the result above follows from a series of local results concerning the p-parts. In particular, we give an explicit recurrence relation on the coefficients of the p-parts, which allows us to extend the results of Chinta, Friedberg, and Gunnells [9] to all $\Phi$ and $n$. Additionally, we show that the p-parts of Chinta and Gunnells [10] agree with those constructed using the crystal graph technique of Brubaker, Bump, and Friedberg [4, 5] (in the cases where both constructions apply).

Publication
Martingale Central Limit Theorem and Nonuniformly Hyperbolic Systems (2013-09), Mohr, Luke

In this thesis we study the central limit theorem (CLT) for nonuniformly hyperbolic dynamical systems. We examine cases in which polynomial decay of correlations leads to a CLT with a non-standard scaling factor of $\sqrt{n \ln n}$. We also formulate an explicit expression for the diffusion constant $\sigma$ in situations where a return time function on the system belongs to a certain class of supermartingales. We then demonstrate applications by exhibiting the CLT for the return time function in four classes of dynamical billiards, including one previously unproven case, the skewed stadium, as well as for the linked twist map. Finally, we introduce a new class of billiards which we conjecture to be ergodic, and we provide numerical evidence to support that claim.
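
To fix notation for the non-standard scaling (our phrasing of the standard statement, not a quotation from the thesis): for an observable $f$ with $\int f \, d\mu = 0$ and Birkhoff sums $S_n = \sum_{k=0}^{n-1} f \circ T^k$, the classical CLT normalizes by $\sqrt{n}$, whereas in the borderline polynomial-mixing cases one instead has

\[
  \frac{S_n}{\sqrt{n \ln n}} \;\xrightarrow{\,d\,}\; \mathcal{N}(0, \sigma^2).
\]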

Publication
Conditional Gaussian Fluctuations and Refined Asymptotics of the Spin in the Phase-Coexistence Region (2013-09), Li, Jingran

In this dissertation, four results are presented on the fluctuations of the spin per site around the thermodynamic magnetization in the mean-field Blume-Capel model, a basic model in statistical mechanics. The first two results refine the main theorem in a 2010 paper by R. S. Ellis, J. Machta, and P. T. Otto published in Annals of Applied Probability 20 (2010) 2118-2161, which provides the first rigorous confirmation of the statistical mechanical theory of finite-size scaling for a mean-field model. The first main result studies the asymptotics of the centered, finite-size magnetization, giving its precise rate of convergence to 0 along parameter sequences lying in the phase-coexistence region and converging sufficiently slowly to either a second-order point or the tricritical point of the model. A simple inequality yields our second main result, which generalizes the main theorem of the Ellis-Machta-Otto paper by giving an upper bound on the rate of convergence to 0 of the absolute value of the difference between the finite-size magnetization and the thermodynamic magnetization. These first two results have direct relevance to the theory of finite-size scaling. They are consequences of the third main result, a new conditional limit theorem for the spin per site in which the conditioning allows us to focus on a neighborhood of the pure states having positive thermodynamic magnetization. The fourth main result is a conditional central limit theorem showing that the fluctuations of the spin per site are Gaussian in a neighborhood of the pure states having positive thermodynamic magnetization.

Publication
Open Books on Contact Three Orbifolds (2013-09), Herr, Daniel

In 2002, Giroux showed that every contact structure has a corresponding open book decomposition. This was the converse to a previous construction of Thurston and Winkelnkemper, and it made open books a vital tool in the study of contact three-manifolds. We extend these results to contact orbifolds, i.e., spaces that are locally diffeomorphic to the quotient of a contact manifold by a compatible finite group action. This involves adapting some of the main concepts and constructions of three-dimensional contact geometry to the orbifold setting.

Publication
Class Numbers of Ray Class Fields of Imaginary Quadratic Fields (2009-05), Kucuksakalli, Omer

Let $K$ be an imaginary quadratic field with class number one and let $\mathfrak{p}$ be a degree one prime ideal of norm $p$ not dividing $6d_K$. In this thesis we generalize an algorithm of Schoof to compute the class number of the ray class field $K_{\mathfrak{p}}$ heuristically. We achieve this by using elliptic units analytically constructed by Stark and the Galois action on them given by Shimura's reciprocity law. We have discovered a very interesting phenomenon where $p$ divides the class number of $K_{\mathfrak{p}}$. This is a counterexample to the elliptic analogue of a well-known conjecture, namely Vandiver's conjecture.

Publication
Data Combination from Multiple Sources Under Measurement Error (2013-02), Gasca-Aragon, Hugo

Regulatory agencies are responsible for monitoring the performance of particular measurement communities, and to achieve their objectives they sponsor intercomparison exercises between the members of these communities. The Intercomparison Exercise Program for Organic Contaminants in the Marine Environment is an ongoing NIST/NOAA program; it was started in 1986 and there have been 19 studies to date. Using these data as motivation, we review the theory and practices applied to their analysis.

It is common practice to apply some kind of filter to comparison study data, ranging from detection and exclusion of outliers to exclusion of all data from a participant whose measurements are very "different". When the measurements are not so "different", the usual assumption is that the laboratories are unbiased, and the simple mean, the weighted mean, or the one-way random effects model is applied to obtain estimates of the true value. Instead, we explore methods to analyze these data under weaker assumptions and apply them to some of the available data. More specifically, we explore estimation of models assessing the laboratories' performance and ways to use those fitted models in estimating a consensus value for new study material. This is done in various ways, starting with models that allow a separate bias for each lab with each compound at each point in time, and then considering generalizations of that: first models where, for a particular compound, the bias may be shared over labs or over time, and then models of systematic biases (which depend on the concentration) fit by combining data from different labs. As seen in the analyses, the latter models may be more realistic.

Due to uncertainty in the certified reference material, analyzing systematic biases leads to a linear regression problem with measurement error. This work differs from the standard work in this area in two ways. First, it allows heterogeneity in the material being delivered to the lab, whether control or study material. Second, it makes use of Fieller's method for estimation, which has not been used in this context before, although others have suggested it. One challenge in using Fieller's method is that explicit expressions are needed for the variance and covariance of the sample variance and covariance of independent but non-identically distributed random variables; these are developed. Simulations are used to compare the performance of the moment/Wald, Fieller, and bootstrap methods for constructing confidence intervals for the slope in the measurement model, and they suggest that Fieller's method performs better than the bootstrap technique. We also explore four estimators for the variance of the error in the equation in this context and determine that the estimator based on the modified squared residuals outperforms the others.

Homogeneity is a desirable property in control and study samples, and special experiments with nested designs must be conducted for homogeneity analysis and assessment. However, simulation shows that heterogeneity has low impact on the performance of the studied estimators. This work also shows that a biased but consistent estimator of the heterogeneity variance can be obtained from the current experimental design.
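
For readers unfamiliar with Fieller's method, here is a minimal sketch for the textbook case of a confidence interval for a ratio of two means (our illustration; the thesis applies the method to the slope in a measurement error model, which requires the variance and covariance expressions developed there).

```python
import numpy as np
from scipy import stats

def fieller_ratio_ci(y, x, level=0.95):
    """Fieller confidence set for E[y]/E[x] from paired samples.

    Inverts (ybar - theta*xbar)^2 <= z^2 * Var(ybar - theta*xbar),
    a quadratic in theta. Returns a finite interval when the
    denominator mean is clearly nonzero; otherwise the set can be
    unbounded or empty and None is returned.
    """
    n = len(x)
    z2 = stats.norm.ppf(0.5 + level / 2) ** 2
    xb, yb = np.mean(x), np.mean(y)
    vx = np.var(x, ddof=1) / n
    vy = np.var(y, ddof=1) / n
    vxy = np.cov(x, y, ddof=1)[0, 1] / n
    a = xb**2 - z2 * vx
    b = xb * yb - z2 * vxy
    c = yb**2 - z2 * vy
    disc = b**2 - a * c
    if a <= 0 or disc < 0:
        return None  # denominator too noisy: unbounded or empty set
    root = np.sqrt(disc)
    return ((b - root) / a, (b + root) / a)

rng = np.random.default_rng(3)
x = rng.normal(2.0, 0.5, size=50)
y = 1.5 * x + rng.normal(0, 0.3, size=50)
print(fieller_ratio_ci(y, x))  # should cover the true ratio near 1.5
```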