- 1. Show that the decision boundary for logistic regression is actually βX = 0, as I claimed. Show that in two dimensions this is actually the equation of a line, as I claimed.
- 2. The formula I gave for P(X) in the MAP classification is only true for the two-class case. Write the more general formula for k-class classification.
- 3. Derive the decision boundary for naive Bayes in the two-class Gaussian case. Show that in the case of uniform priors and equal covariance between the positive and negative classes, naive Bayes is the same as LDA with diagonal covariance.
- 4. I said that discriminative classification methods don’t use the ML rule. Make assumptions about the priors that let you apply the ML rule to logistic regression. (Hint: Use Bayes’ theorem.)
- 5. Notice that the parameters I gave for the model, f (in the DNA binding site example) were not the maximum likelihood estimates based on the positive training examples that I showed in the table. What are the MLEs for the positive class? Why didn’t I use them for the classifier?
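As a warm-up for exercise 1, here is a small numeric sketch (not a solution) checking that the set of points where a 2-D logistic regression outputs probability 0.5 is a straight line. The weights `b0, b1, b2` are made-up values, not from the text.

```python
import numpy as np

# Hypothetical weights for a 2-D logistic regression with an intercept:
# P(y=1|x) = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2))).
b0, b1, b2 = -1.0, 2.0, 3.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The boundary is where P(y=1|x) = 0.5, i.e. b0 + b1*x1 + b2*x2 = 0.
# Solving for x2 gives a line: x2 = -(b0 + b1*x1) / b2.
x1 = np.linspace(-5, 5, 11)
x2 = -(b0 + b1 * x1) / b2

# Every point on that line yields a predicted probability of exactly 0.5.
probs = sigmoid(b0 + b1 * x1 + b2 * x2)
assert np.allclose(probs, 0.5)
```

The exercise asks you to show this algebraically; the assertion just confirms the claim numerically for one set of weights.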
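For exercise 3, a quick numerical sanity check of the claim: with uniform priors and a shared diagonal covariance, the Gaussian naive Bayes log-odds coincide with the diagonal-covariance LDA discriminant. The means, variances, and test points below are arbitrary made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared diagonal covariance (the equal-covariance assumption), uniform priors.
mu_pos = np.array([1.0, 2.0])
mu_neg = np.array([-1.0, 0.0])
var = np.array([0.5, 1.5])          # per-feature variances (diagonal covariance)

X = rng.normal(size=(200, 2)) * np.sqrt(var)  # arbitrary test points

def nb_log_odds(X):
    # Gaussian naive Bayes: sum of per-feature log densities; the shared
    # normalizing constants and the uniform priors cancel in the log-odds.
    ll_pos = -0.5 * np.sum((X - mu_pos) ** 2 / var, axis=1)
    ll_neg = -0.5 * np.sum((X - mu_neg) ** 2 / var, axis=1)
    return ll_pos - ll_neg

def lda_log_odds(X):
    # LDA with Sigma = diag(var): a linear discriminant x @ w + c.
    Sigma_inv = np.diag(1.0 / var)
    w = Sigma_inv @ (mu_pos - mu_neg)
    c = -0.5 * (mu_pos @ Sigma_inv @ mu_pos - mu_neg @ Sigma_inv @ mu_neg)
    return X @ w + c

assert np.allclose(nb_log_odds(X), lda_log_odds(X))
```

The exercise asks for the derivation; expanding the squared terms in `nb_log_odds` and cancelling the quadratic in x is exactly the algebra the assertion is verifying.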
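As a hint toward exercise 5, the sketch below uses hypothetical positive training sequences (not the table from the text) to show what goes wrong with the raw per-position maximum likelihood estimates: any base that never appears at some position gets an MLE of zero, which zeroes out the likelihood of every new sequence containing it.

```python
import numpy as np

# Hypothetical positive examples for a length-4 binding site (made-up data,
# not the table from the text).
positives = ["ACGT", "ACGA", "ACGT", "TCGT"]
bases = "ACGT"

counts = np.zeros((4, 4))  # rows: positions, columns: bases
for seq in positives:
    for i, ch in enumerate(seq):
        counts[i, bases.index(ch)] += 1

mle = counts / counts.sum(axis=1, keepdims=True)   # per-position MLEs

# An unseen base at any position makes the whole likelihood zero:
test = "GCGT"  # 'G' never occurred at position 0 in the training data
lik = np.prod([mle[i, bases.index(ch)] for i, ch in enumerate(test)])
assert lik == 0.0

# Adding a pseudocount (Laplace smoothing) keeps every likelihood positive,
# which is one reason a classifier might not use the raw MLEs.
smoothed = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)
lik_s = np.prod([smoothed[i, bases.index(ch)] for i, ch in enumerate(test)])
assert lik_s > 0.0
```

Computing the actual MLEs from the table in the text, and deciding whether this zero-count problem is the author's reason, is the exercise.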