Fitting And Regression
Project Proposals
Overview
In this project, you have two options for the general route you can take:
- Reimplement analysis from the literature.
- New, exploratory analysis of existing data.
More details are in the final project guidelines.
Project Proposals
The proposal should be less than one page and describe the following items:
- Why the topic you chose is interesting
- How your project fits the criteria above
- What overall approach you plan to take and why
- How your project can be finished within a month
- An estimate of your project's difficulty
We are available to discuss your ideas whenever you are ready, and you should discuss your idea with us prior to submitting your proposal.
We recommend an early start: the earlier you finalize a proposal, the sooner you can begin the project.
Application
Paternal de novo mutations
Questions:
- Where do de novo mutations arise?
- Are there factors that influence the rate of de novo mutations from one generation to another?
- Use David Q as an example of de novo mutations.
Application: Paternal de novo mutations
What is different about paternal vs. maternal meiosis?
Fitting
Goal Of Fitting
- Fitting is the process of comparing a model to a compendium of data
- After fitting, we will have a model that explains existing data and can predict new data
- Use AlphaFold activity?
- Protein structure prediction was Nature Methods' Method of the Year in 2021.
Process of Fitting
The process of fitting is nothing more than finding the maximum likelihood model for a set of points.
The key decision is how one defines the problem, i.e., how the distribution of errors is described.
Caveats
- Any fitting result is highly dependent upon the correctness of the model
- Successful fitting requires concordance between the model and data
- Too little data and a model is underdetermined
- Unaccounted-for variables can lead to systematic error
Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity. (George Box, Journal of the American Statistical Association, 1976)
- Von Neumann: The truth is too complicated to allow anything but approximation.
Any Fitting Depends on Model Correctness
Fitting does not happen in a vacuum!
Walk through model, what the model is, etc.
Ordinary Least Squares
Ordinary Least Squares
- Probably the most widely used estimation technique.
- Based on extending the maximum likelihood estimate of a distribution.
- Model assumes output quantity is linear combination of inputs.
- Every model will have:
- Mathematical definition
- Geometric interpretation
- Intuitive “feel”
Ordinary Least Squares
If we have a vector of \(n\) observations \(\mathbf{y}\), our predictions are going to follow the form:
\[ \mathbf{y} = \mathbf{X} \beta + \mathbf{\epsilon} \]
Here \(\mathbf{X}\) is an \(n \times p\) structure matrix, \(\beta\) is a \(p\)-dimensional vector with the parameters of our model, and \(\mathbf{\epsilon} = \left( \epsilon_1, \epsilon_2, \ldots, \epsilon_n \right)'\) is the noise present in the model.
Ordinary Least Squares
\[ \mathbf{y} = \mathbf{X} \beta + \mathbf{\epsilon} \]
\(\mathbf{\epsilon}\) is usually assumed to consist of uncorrelated random components with constant variance \(\sigma^2\):
\[ \mathbf{\epsilon} \sim \left( \mathbf{0}, \sigma^2 \mathbf{I} \right) \]
Graph this
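A minimal sketch of such a graph, simulating points from \(\mathbf{y} = \mathbf{X} \beta + \mathbf{\epsilon}\); the parameter values and noise level here are arbitrary choices for illustration:

```python
import numpy as np
from matplotlib import pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
b0, b1, sigma = 1.0, 2.0, 1.5                     # assumed "true" parameters
y = b0 + b1 * x + rng.normal(0.0, sigma, x.size)  # y = X beta + eps

plt.plot(x, b0 + b1 * x, 'k-', label='model')     # the noiseless model line
plt.plot(x, y, '.', label='observations')         # points scattered by eps
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
```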
Ordinary Least Squares
Single variable case
The structure matrix is little more than the data, sometimes transformed, usually with an offset. So, another way to write:
\[ \mathbf{y} = \mathbf{X} \beta + \mathbf{\epsilon} \]
would be:
\[ \mathbf{y} = m_1 \mathbf{x}_1 + m_2 \mathbf{x}_2 + \ldots + b + \mathbf{\epsilon} \]
- Go through why you would need to transform the data
- Write out on the board how this corresponds to shifted distributions
Ordinary Least Squares
Single variable case
\[ \mathbf{y} = m \mathbf{x} + b + \mathbf{\epsilon} \]
The optimal values of \(m\) and \(b\) are those that minimize the squared distance between the predictions and \(\mathbf{y}\); finding them does not require knowing \(\mathbf{\epsilon}\). A quick illustration follows below.
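For instance, NumPy's polyfit returns the least squares \(m\) and \(b\) directly (a sketch on simulated data; the slope and intercept are assumptions for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)  # noisy line with m=3, b=1

m, b = np.polyfit(x, y, deg=1)  # degree-1 fit returns [slope, intercept]
```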
Ordinary Least Squares
Gauss in the early 1800s, and later Markov, showed that the least squares estimate of \(\beta\), \(\hat{\beta}\), is:
\[ \hat{\mathbf{\beta}} = \arg\min_{\beta}{\left\Vert \mathbf{y} - \mathbf{X} \beta \right\Vert^{2}} \]
Ordinary Least Squares
\[ \hat{\mathbf{\beta}} = \arg\min_{\beta}{\left\Vert \mathbf{y} - \mathbf{X} \beta \right\Vert^{2}} \]
can be directly calculated by:
\[ \hat{\beta} = \mathbf{S}^{-1} \mathbf{X}' \mathbf{y} \]
where
\[ \mathbf{S} = \mathbf{X}'\mathbf{X} \]
\[ \frac{\partial}{\partial \beta} \left(\left\Vert \mathbf{y} - \mathbf{X} \beta \right\Vert^{2}\right) = -2 \mathbf{X}' \left( \mathbf{y} - \mathbf{X} \beta \right) = 0 \] \[ \mathbf{X}' \mathbf{y} = \mathbf{X}' \mathbf{X} \beta \] \[ \hat{\beta} = \left(\mathbf{X}' \mathbf{X} \right)^{-1} \mathbf{X}' \mathbf{y} \]
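A sketch verifying the closed form on simulated data (the true \(\beta\) and noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # offset column + data
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(0.0, 0.3, n)

S = X.T @ X
beta_hat = np.linalg.solve(S, X.T @ y)  # S^{-1} X' y, without forming the inverse

# Agrees with NumPy's least squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(beta_hat, beta_lstsq)
```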
Ordinary Least Squares
\(\hat{\beta}\) is the maximum likelihood estimate of \(\beta\), and has covariance \(\sigma^2 \mathbf{S}^{-1}\):
\[\hat{\beta}\sim\left(\beta,\sigma^2 \mathbf{S}^{-1}\right)\]
In the normal case (when our assumptions hold), \(\hat{\beta}\) is an unbiased estimator of \(\beta\). Making these calculations tractable for larger data sets used to be a challenge but is now trivial.
- How do we know \(\sigma\)?
- The standard deviation of the residuals is our best estimate of \(\sigma\), as sketched below.
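Continuing the simulated-data sketch above (reusing X, y, S, beta_hat, n, and p from there), \(\sigma^2\) can be estimated from the residuals and plugged into the covariance formula:

```python
import numpy as np

resid = y - X @ beta_hat                   # residuals of the fit
sigma2_hat = resid @ resid / (n - p)       # unbiased estimate of sigma^2
cov_beta = sigma2_hat * np.linalg.inv(S)   # covariance of beta-hat
std_err = np.sqrt(np.diag(cov_beta))       # standard error of each parameter
```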
Ordinary Least Squares
Likelihood of Model
\[ \frac{-n}{2} \log(\sigma^{2}) - \frac{1}{2 \sigma^{2}} \sum_{i=1}^{n} (y_{i}-x_{i} \beta)^{2} \]
Therefore, considering only \(\beta\) (the only factor that influences the predictions), we need to minimize:
\[ \sum_{i=1}^{n} (y_{i}-x_{i} \beta)^{2} \]
Exactly how we calculate \(\hat{\beta}\)!
Go through where this arises from. Leave this up for later.
\[ f(x | \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} \]
\[ \ln f(x | \mu, \sigma^2) = -\frac{1}{2}\ln(2\pi\sigma^2) - \frac{(x-\mu)^2}{2\sigma^2} \]
\[ \ln L(\mu, \sigma^2 | x_1, x_2, \ldots, x_n) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2 \]
\[ \ln L(\beta, \sigma^2 \mid y) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2} (y - X\beta)'(y - X\beta). \]
Ordinary Least Squares
What might be some advantages of a method such as this?
- Only has p parameters.
- Can be calculated directly from the data, without an iterative optimization procedure; both the fit and its uncertainty have closed-form expressions.
- Fast
- Easily quantify error
- Scalable
- Clear assumptions
- Clearly interpretable
Ordinary Least Squares
What are some of the assumptions?
What are the implications of these assumptions not holding?
What are some downsides?
- Very sensitive to outliers.
- Fails for p > n.
For reference, the gradient of the log-likelihood with respect to \(\beta\) is:
\[ \frac{\partial \ln L}{\partial \beta} = \frac{1}{\sigma^2}\mathbf{X}'(\mathbf{y}-\mathbf{X}\beta) \]
Implementation
scikit-learn provides a very basic function for ordinary least squares: sklearn.linear_model.LinearRegression
- fit_intercept: Should an intercept value be fit?
- normalize: Should the input variables be mean and variance scaled?
- No tests for significance/model performance included.
- We’ll discuss evaluating the model in depth later.
Or there’s an even more bare function in NumPy: numpy.linalg.lstsq.
- Takes input arrays A and B.
- Solves the equation \(Ax=B\) by computing a vector \(x\) that minimizes the Euclidean 2-norm \(\lVert B-Ax \rVert^2\). A minimal line-fit usage is sketched below.
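For example (toy data; the structure matrix pairs each x with a constant 1 so that an intercept is fit):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([-1.0, 0.2, 0.9, 2.1])
A = np.column_stack([x, np.ones_like(x)])            # structure matrix [x, 1]
(m, b), residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
```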
Implementation
```python
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_diabetes
from matplotlib import pyplot as plt

lr = LinearRegression()
data = load_diabetes()
y = data.target

lr.fit(data.data, y)
predicted = lr.predict(data.data)

fig, ax = plt.subplots()
ax.scatter(y, predicted, edgecolors=(0, 0, 0))
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('Measured')
ax.set_ylabel('Predicted')
```
- Run this as a notebook.
- Play with options.
Implementation
(Figure: the measured vs. predicted plot produced by the code above.)
Non-Linear Least Squares
Non-Linear Least Squares
Non-Linear Least Squares makes similar assumptions to ordinary least squares, but for arbitrary functions. Thus, instead of following the form:
\[ \mathbf{y} = \mathbf{X} \beta + \mathbf{\epsilon} \]
Our input-output relationship is:
\[ \mathbf{y} = f(\mathbf{X}, \beta) + \mathbf{\epsilon} \]
for the same construction of \(\mathbf{\epsilon}\). That is, we still assume:
- Independent data points
- Normally distributed error
- A known functional relationship \(f(x)\)
Transformation
Non-linear least squares fits used to be performed mostly by transforming one's data into a linear model,
for instance by taking ratios of variables, or log-transforming them.
This is now considered bad practice.
Why?
- It distorts the error term, as sketched below.
- But it made the fit easier to calculate by hand.
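To see the distortion, consider additive noise on the original scale; a first-order expansion (a sketch, assuming \(\epsilon\) is small relative to \(f\)) gives:
\[ \log\left(f(x, \beta) + \epsilon\right) \approx \log f(x, \beta) + \frac{\epsilon}{f(x, \beta)} \]
so the transformed noise has variance \(\sigma^2 / f(x, \beta)^2\), which varies with the prediction itself, violating the constant-variance assumption.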
Non-Linear Least Squares
Algorithms
We again need to solve for \(\beta\) to minimize the sum of squared error:
- There are many methods to solve these problems, and finding the true minimum is not a trivial task.
- We’re not going to cover how these algorithms work in depth.
Non-Linear Least Squares
Algorithms
One property we can take advantage of is that the gradient of the SSE w.r.t. \(\beta\) at the minimum is zero (\(r_i\) is the residual of the \(i\)th point):
\[ {\frac {\partial S}{\partial \beta_j}}=2\sum_i r_i{\frac {\partial r_i}{\partial \beta_j}}=0 \]
- \(\frac{\partial r_i}{\partial \beta_j}\) is a function of both the nonlinear function and the data.
- This can be expanded out through a first-order Taylor approximation.
- Doing so essentially performs ordinary least squares around the current point, for the linearized function.
Non-Linear Least Squares
Algorithms
\[ {\frac {\partial S}{\partial \beta_j}}=2\sum_i r_i{\frac {\partial r_i}{\partial \beta_j}}=0 \]
- \(\frac{\partial r_i}{\partial \beta_j}\) is a function of both the nonlinear function and the data.
- This can be expanded out through a first-order Taylor approximation.
- Doing so essentially performs OLS around the current point.
- \({\frac{\partial r_i}{\partial \beta_j}}= -J_{ij}\), where \(J\) is the Jacobian of the function.
- Many NNLSQ solvers require \(J\) for this reason; it can be approximated by finite differences.
- Probably the most common method, Gauss-Newton, uses this property with Newton's method (a minimal sketch follows below).
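As an illustration of the idea, not any particular library's implementation, here is a minimal Gauss-Newton loop in NumPy; the toy binding data, fixed iteration count, and starting guess are all assumptions for demonstration:

```python
import numpy as np

def gauss_newton(residual, jacobian, beta0, n_iter=20):
    # Linearize the residuals, solve the resulting OLS problem, repeat.
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = residual(beta)   # residual vector at the current beta
        J = jacobian(beta)   # d(residual)/d(beta), shape (n, p)
        beta = beta - np.linalg.solve(J.T @ J, J.T @ r)
    return beta

# Hypothetical single-site binding data and model f = k1*x / (1 + k1*x)
x = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
y = np.array([0.09, 0.22, 0.49, 0.74, 0.91])
resid = lambda b: y - (b[0] * x) / (1 + b[0] * x)
jac = lambda b: (-x / (1 + b[0] * x) ** 2).reshape(-1, 1)  # d(resid)/d(k1)
k1 = gauss_newton(resid, jac, [0.5])                       # converges near k1 ~ 1
```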
Non-Linear Least Squares
Algorithms - Key Takeaways
- Unlike ordinary least squares, there is no guarantee of finding the optimal solution.
- Depending upon the data and model, there may be many local minima.
- Exactly equivalent to shifting normal distributions up and down around one’s data.
Implementation
SciPy’s scipy.optimize.least_squares is a very capable implementation.
- The main necessary parameters are:
  - fun, the objective function
  - x0, an initial guess for the parameter values
- Note that fun should return a vector of the residuals
  - So it should handle all the data itself
NNLS Example
Binding Data
Let’s say we’re looking at a protein-protein interaction such as this:
```python
from matplotlib import pyplot as plt

# X: ligand concentrations [nM]; Y: measured binding (loaded elsewhere)
plt.semilogx(X, Y, '.')
plt.xlabel('Concentration [nM]')
plt.ylabel('Binding')
```
- Run this as a notebook.
NNLS Example
Binding Data
We can predict the amount of binding we’d observe from a single-site binding model:
```python
def klotz1(k1, lig):
    # Single-site binding model: fractional binding at ligand concentration lig
    return (k1 * lig) / (1 + k1 * lig)
```
NNLS Example
Binding Data
```python
plt.semilogx(X, klotz1(1., X), '.')
plt.xlabel('Concentration [nM]')
plt.ylabel('Binding')
```
NNLS Example
Binding Data
SciPy asks for the residuals at each fitting point, so we need to convert a prediction to that:
```python
def ls_obj_k1(k1, ligs, data):
    # Residuals: measured data minus model prediction at each concentration
    return data - klotz1(k1, ligs)
```
NNLS Example
Binding Data
```python
import scipy as sp
import scipy.optimize

sp.optimize.least_squares(ls_obj_k1, 1., args=(X, Y))
```

```
 active_mask: array([ 0.])
        cost: 0.0086776496708916573
         fun: array([  4.79e-05,   9.00e-05,  -1.09e-04,
                       8.04e-04,  -9.67e-04,   3.85e-03,
                       4.61e-03,   2.34e-03,   2.36e-02,
                       9.64e-03,  -2.48e-02,   1.93e-02,
                      -4.93e-02,   5.54e-02,  -3.66e-02,
                       2.97e-03,   3.39e-02,  -8.74e-02])
        grad: array([ -9.57228474e-09])
         jac: array([[-0.00099809],
                     [-0.00199235],
                     [-0.0039695 ],
                     # ...
                     [-0.01608763],
                     [-0.00817133]])
     message: '`gtol` termination condition is satisfied.'
        nfev: 4
        njev: 4
  optimality: 9.5722847420895082e-09
      status: 1
     success: True
           x: array([ 0.95864059])
```
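Here x: array([ 0.95864059]) is the fitted estimate of \(k_1\), fun holds the residual at each point, and the `gtol` message with success: True indicates the solver stopped because the gradient became negligibly small.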
Generalized Linear Model
What if the error term isn’t Gaussian?
- In many cases linear regression can be inappropriate
- E.g., a measurement that is Poisson distributed (a sketch follows below)
- The central limit theorem works to your benefit here: with enough counts, Poisson noise approaches Gaussian.
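One concrete option is scikit-learn's Poisson GLM; this sketch fits counts with a log-linear rate (the simulated coefficients are arbitrary choices):

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(200, 1))
y = rng.poisson(np.exp(0.5 + 1.2 * X[:, 0]))  # counts with a log-linear rate

glm = PoissonRegressor().fit(X, y)  # maximizes a (penalized) Poisson likelihood
```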
Review
Questions
- Given the binding data presented here, do you think a least squares model is most appropriate?
- How might you test whether your data seems to follow the assumptions of your model?
- A KS test can be used for the second question, as sketched below.
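A sketch of that check on residuals (the residuals here are simulated stand-ins; with estimated mean and variance the nominal KS p-value is only approximate):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
resid = rng.normal(0.0, 1.0, size=50)            # stand-in for residuals from a fit

z = (resid - resid.mean()) / resid.std(ddof=1)   # standardize the residuals
stat, pval = stats.kstest(z, 'norm')             # H0: residuals are standard normal
```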
Reading & Resources
- 📖: Computer Age Statistical Inference, Chapter 8
- 📖: Points of Significance: Simple linear regression
- 💾: sklearn: Linear Models
- 👂: Linear Digressions: The assumptions of ordinary least squares
- 👂: Linear Digressions: Convex (and non-convex) optimization
- In summary:
- Fitting is the process of optimizing the model.
- OLS is exceptional in that we can directly calculate the answer.
- OLS and NNLS shift normal distributions up and down around the line of prediction.
- GLM can handle other distributions.
Review Questions
1. Are OLS or NNLS guaranteed to find the solution?
2. How are new points predicted to be distributed in OLS?
3. How are new points predicted to be distributed in NNLS?
4. How might you determine whether the assumptions you made when running OLS are valid? (Hint: Think about the tests from lecture 1.)
5. What is a situation in which the statistical assumptions of OLS can be valid but calculating a solution fails?
6. You’re not sure a function you wrote to calculate the OLS estimator is working correctly. What could you check to ensure you have the right answer? (Hint: Slide 17)
7. You’ve made a monitor that you know provides a voltage proportional to blood glucose. Design a scheme for a model to convert from voltage to blood glucose, using some set of calibration points. What is the absolute minimum number of calibration points you’d need? How would you expect new calibration points to be distributed?
8. A team member suggests that the voltage-glucose relationship from (7) is log-linear instead of linear, and so suggests using log(V) with OLS instead. When would this be alright? What is an alternative approach?