EXERCISES

  • 1. What is the difference between a bivariate Gaussian model with a positive off-diagonal term in the covariance matrix and a linear regression model with b1 > 0?
  • 2. Why did I derive only two MLEs for linear regression when there are three parameters in the model?
  • 3. Draw graphical model representations of the conditional dependence structure for simple linear regression and for logistic regression.
  • 4. Write the likelihood for a regression-like model where Y is assumed to have a Poisson distribution (as opposed to the standard Gaussian). Why is this model more realistic for analysis of the relationship between mRNA levels and protein levels, as in the example discussed above? (A code sketch of this likelihood follows the exercises.)
  • 5. Write the log-likelihood for the regression model E[Y|X] = b0 + b1X + b2X² with Gaussian distributed errors, and derive a formula for the MLE by taking the derivative with respect to b2, setting it to 0, and solving for b2. (A numerical check of the result follows the exercises.)
  • 6. Why is Gaussian noise in log space considered “multiplicative”? (Hint: Another way to write the simple regression model is Y = b0 + b1X + N(0, σ).) A short simulation follows the exercises.
  • 7. Show that kernel regression with a Gaussian kernel converges to a simple average in the limit of infinite bandwidth. What happens to the kernel regression estimate when the bandwidth goes to 0? (A numerical illustration of both limits follows the exercises.)
  • 8. Take derivatives of the weighted SSR with respect to the parameters and set them to zero to find out how to minimize the weighted SSR in LOESS. (A sketch of the resulting weighted least-squares fit follows the exercises.)
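
For Exercise 4, here is a minimal sketch of the Poisson likelihood in Python. The log link and the function name poisson_log_likelihood are assumptions made for this sketch (a log link is one common way to keep the Poisson rate positive), not the book's own code.

    import numpy as np
    from scipy.special import gammaln

    def poisson_log_likelihood(b0, b1, x, y):
        # Assumed model: Y | X ~ Poisson(lambda) with log(lambda) = b0 + b1*x.
        lam = np.exp(b0 + b1 * x)
        # log P(y | lambda) = y*log(lambda) - lambda - log(y!)
        return np.sum(y * np.log(lam) - lam - gammaln(y + 1))

    # Example usage with made-up data:
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([2, 7, 20])
    print(poisson_log_likelihood(0.1, 1.0, x, y))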
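
For Exercise 5, a quick numerical check of the hand derivation: under Gaussian errors, maximizing the log-likelihood is equivalent to minimizing the SSR, so the MLEs coincide with the least-squares estimates. The coefficients and data below are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-2, 2, 200)
    y = 1.0 - 0.5 * x + 0.25 * x**2 + rng.normal(0, 0.1, 200)

    # Design matrix for E[Y|X] = b0 + b1*X + b2*X^2.
    X = np.column_stack([np.ones_like(x), x, x**2])
    b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(b_hat)  # should be close to (1.0, -0.5, 0.25)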
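
For Exercise 6, a short simulation of the hint: adding Gaussian noise on the log scale is the same as multiplying by a lognormal factor on the original scale. All parameter values here are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(1.0, 10.0, 200)
    b0, b1, sigma = 0.5, 0.3, 0.2

    # Additive Gaussian noise on the log scale...
    log_y = b0 + b1 * np.log(x) + rng.normal(0, sigma, size=x.size)
    y = np.exp(log_y)

    # ...is a multiplicative factor on the original scale:
    # y = exp(b0) * x**b1 * exp(noise), where exp(noise) scatters around 1.
    noise_factor = y / (np.exp(b0) * x**b1)
    print(noise_factor.mean(), noise_factor.std())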
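
For Exercise 7, a numerical illustration of the two bandwidth limits, using the Nadaraya-Watson form of kernel regression with a Gaussian kernel. The function name and test data are invented for this sketch.

    import numpy as np

    def kernel_regression(x0, x, y, h):
        # Gaussian-kernel (Nadaraya-Watson) estimate of E[Y|X = x0].
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        return np.sum(w * y) / np.sum(w)

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 50)
    y = 2.0 * x + rng.normal(0, 1, 50)

    # Huge bandwidth: every weight approaches 1, so the estimate
    # approaches the simple average of all the y values.
    print(kernel_regression(5.0, x, y, h=1e6), y.mean())

    # Tiny bandwidth: only the nearest point keeps any weight
    # (evaluated at a data point to avoid dividing zero by zero).
    print(kernel_regression(x[0], x, y, h=1e-4), y[0])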
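
For Exercise 8, setting the derivatives of the weighted SSR to zero yields the weighted least-squares normal equations. The sketch below solves them at a single point; the tricube weights follow Cleveland and Devlin (1988), but the function names and data are invented here.

    import numpy as np

    def weighted_linear_fit(x, y, w):
        # Minimize sum_i w_i * (y_i - b0 - b1*x_i)^2. Setting the derivatives
        # with respect to b0 and b1 to zero gives the weighted normal
        # equations (X^T W X) b = X^T W y.
        X = np.column_stack([np.ones_like(x), x])
        W = np.diag(w)
        return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

    def tricube_weights(x0, x, span):
        # LOESS-style weights: largest at x0, falling to zero beyond the span.
        d = np.minimum(np.abs(x - x0) / span, 1.0)
        return (1 - d**3) ** 3

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 10, 60))
    y = np.sin(x) + rng.normal(0, 0.2, 60)
    b0, b1 = weighted_linear_fit(x, y, tricube_weights(5.0, x, 2.0))
    print(b0 + b1 * 5.0)  # the local linear fit evaluated at x0 = 5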

REFERENCES AND FURTHER READING

Cleveland WS, Devlin SJ. (1988). Locally weighted regression: An approach to regression analysis by local fitting. J. Am. Stat. Assoc. 83(403):596-610.

Csardi G, Franks A, Choi DS, Airoldi EM, Drummond DA. (2015). Accounting for experimental noise reveals that mRNA levels, amplified by post-transcriptional processes, largely determine steady-state protein levels in yeast. PLoS Genet. 11(5):e1005206.
