Sunday, December 29, 2019

Logistic Regression

            🥀MACHINE LEARNING🥀 

              Logistic Regression⁕
Every machine learning algorithm performs best under a given set of conditions. To get good performance, we must know which algorithm to use for the problem at hand; you can't just use one particular algorithm for every problem. For example, a linear regression algorithm cannot be applied to a categorical (e.g., yes/no) dependent variable. This is where logistic regression comes in.
🌝 Logistic Regression is a well-known statistical model used for binary classification, that is, for predictions of the type this or that, yes or no, A or B, etc. It can also be used for multi-class classification, but here we will focus on its simplest application. It is one of the most frequently used machine learning algorithms for binary classification, translating the input to 0 or 1. For example,


  • 0: negative class
  • 1: positive class

The classification problem is just like the regression problem, except that the values we now want to predict take on only a small number of discrete values. For now, we will focus on the binary classification problem in which y can take on only two values, 0 and 1. (Most of what we say here will also generalize to the multiple-class case.) For instance, if we are trying to build a spam classifier for email, then x^{(i)} may be some features of a piece of email, and y may be 1 if it is a piece of spam mail, and 0 otherwise. Hence, y∈{0,1}. 0 is also called the negative class, and 1 the positive class, and they are sometimes also denoted by the symbols “-” and “+.” Given x^{(i)}, the corresponding y^{(i)} is also called the label for the training example.


🌝Hypothesis Representation


We could approach the classification problem ignoring the fact that y is discrete-valued, and use our old linear regression algorithm to try to predict y given x. However, it is easy to construct examples where this method performs very poorly. Intuitively, it also doesn't make sense for h_\theta (x) to take values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}. To fix this, let's change the form of our hypotheses h_\theta (x) to satisfy 0 \leq h_\theta (x) \leq 1. This is accomplished by plugging \theta^T x into the Logistic Function. Our new form uses the "Sigmoid Function," also called the "Logistic Function":
h_\theta(x) = g(\theta^T x), \qquad g(z) = \frac{1}{1 + e^{-z}}
[Image: the sigmoid function, an S-shaped curve rising from 0 to 1]
The function g(z), shown here, maps any real number to the (0, 1) interval, making it useful for transforming an arbitrary-valued function into a function better suited for classification.
h_\theta(x) will give us the probability that our output is 1. For example, h_\theta(x) = 0.7 gives us a probability of 70% that our output is 1. The probability that our prediction is 0 is just the complement of the probability that it is 1 (e.g., if the probability that it is 1 is 70%, then the probability that it is 0 is 30%).
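To make this concrete, here is a minimal Octave sketch of the sigmoid hypothesis (the parameter values and input point are made up for illustration):

g = @(z) 1 ./ (1 + exp(-z));   % sigmoid: maps any real z into (0, 1)
theta = [-3; 1];               % hypothetical parameters
x = [1; 3.8];                  % feature vector with bias term x_0 = 1
h = g(theta' * x)              % theta'*x = 0.8, so h ≈ 0.69: P(y = 1 | x) ≈ 69%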
🌝Decision Boundary


In order to get our discrete 0 or 1 classification, we can translate the output of the hypothesis function as follows:
h_\theta(x) \geq 0.5 \rightarrow y = 1
h_\theta(x) < 0.5 \rightarrow y = 0
The way our logistic function g behaves is that when its input is greater than or equal to zero, its output is greater than or equal to 0.5:
g(z) \geq 0.5 \quad \text{when} \quad z \geq 0
Remember:
z = 0: e^{0} = 1 \Rightarrow g(z) = 1/2
z \to \infty: e^{-z} \to 0 \Rightarrow g(z) = 1
z \to -\infty: e^{-z} \to \infty \Rightarrow g(z) = 0
So if our input to g is \theta^T X, then that means:
h_\theta(x) = g(\theta^T x) \geq 0.5 \quad \text{when} \quad \theta^T x \geq 0
From these statements we can now say:
\theta^T x \geq 0 \Rightarrow y = 1
\theta^T x < 0 \Rightarrow y = 0
The decision boundary is the line that separates the area where y = 0 and where y = 1. It is created by our hypothesis function.


Example:
\theta = \begin{bmatrix} 5 \\ -1 \\ 0 \end{bmatrix}
y = 1 \text{ if } 5 + (-1)x_1 + 0 \cdot x_2 \geq 0
5 - x_1 \geq 0
-x_1 \geq -5
x_1 \leq 5
In this case, our decision boundary is a straight vertical line placed on the graph where x_1 = 5, and everything to the left of that denotes y = 1, while everything to the right denotes y = 0.
Again, the input to the sigmoid function g(z) (e.g. \theta^T X) doesn't need to be linear, and could be a function that describes a circle (e.g. z = \theta_0 + \theta_1 x_1^2 +\theta_2 x_2^2) or any shape to fit our data.
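As a quick Octave sketch of the x_1 = 5 example above (theta is taken from that example; the test points are made up):

theta = [5; -1; 0];            % theta from the example above
x = [1; 3; 2];                 % a point with x_1 = 3
pred = (theta' * x >= 0)       % theta'*x = 2 >= 0, so pred = 1 (y = 1)
x = [1; 8; 2];                 % a point with x_1 = 8
pred = (theta' * x >= 0)       % theta'*x = -3 < 0, so pred = 0 (y = 0)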
🌝Cost Function

We cannot use the same cost function that we use for linear regression because the Logistic Function will cause the output to be wavy, causing many local optima. In other words, it will not be a convex function.
Instead, our cost function for logistic regression looks like:
J(\theta) = \frac{1}{m} \sum_{i=1}^m \mathrm{Cost}(h_\theta(x^{(i)}), y^{(i)})
\mathrm{Cost}(h_\theta(x), y) = -\log(h_\theta(x)) \quad \text{if } y = 1
\mathrm{Cost}(h_\theta(x), y) = -\log(1 - h_\theta(x)) \quad \text{if } y = 0
When y = 1, we get the following plot for J(\theta) vs h_\theta(x):
[Plot: \mathrm{Cost} = -\log(h_\theta(x)), falling from \infty at h_\theta(x) = 0 to 0 at h_\theta(x) = 1]

Similarly, when y = 0, we get the following plot for J(\theta) vs h_\theta(x):
[Plot: \mathrm{Cost} = -\log(1 - h_\theta(x)), rising from 0 at h_\theta(x) = 0 to \infty at h_\theta(x) = 1]

\mathrm{Cost}(h_\theta(x), y) = 0 \text{ if } h_\theta(x) = y
\mathrm{Cost}(h_\theta(x), y) \to \infty \text{ if } y = 0 \text{ and } h_\theta(x) \to 1
\mathrm{Cost}(h_\theta(x), y) \to \infty \text{ if } y = 1 \text{ and } h_\theta(x) \to 0
If our correct answer 'y' is 0, then the cost function will be 0 if our hypothesis function also outputs 0. If our hypothesis approaches 1, then the cost function will approach infinity.
If our correct answer 'y' is 1, then the cost function will be 0 if our hypothesis function outputs 1. If our hypothesis approaches 0, then the cost function will approach infinity.
Note that writing the cost function in this way guarantees that J(θ) is convex for logistic regression.
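A quick numerical check of this behavior in Octave (the h values are arbitrary illustrations):

h = [0.99 0.5 0.01];           % three hypothetical outputs h_theta(x)
cost_if_y1 = -log(h)           % ≈ [0.01  0.69  4.61]: grows without bound as h -> 0
cost_if_y0 = -log(1 - h)       % ≈ [4.61  0.69  0.01]: grows without bound as h -> 1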

🌝Simplified Cost Function and Gradient Descent

\mathrm{Cost}(h_\theta(x), y) = -y \log(h_\theta(x)) - (1 - y) \log(1 - h_\theta(x))
Notice that when y is equal to 1, then the second term (1-y)\log(1-h_\theta(x)) will be zero and will not affect the result. If y is equal to 0, then the first term -y \log(h_\theta(x)) will be zero and will not affect the result.
We can fully write out our entire cost function as follows:
J(\theta) = - \frac{1}{m} \displaystyle \sum_{i=1}^m [y^{(i)}\log (h_\theta (x^{(i)})) + (1 - y^{(i)})\log (1 - h_\theta(x^{(i)}))]

A vectorized implementation is

h = g(X\theta)
J(\theta) = \frac{1}{m} \left( -y^{T} \log(h) - (1 - y)^{T} \log(1 - h) \right)
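As a minimal sketch, this vectorized cost can be written in Octave as follows (assuming X is the design matrix with a leading column of ones, y is a 0/1 column vector, and theta is the parameter vector; the function name is my own):

function J = logisticCost(theta, X, y)
  % vectorized logistic regression cost: h = g(X*theta)
  m = length(y);
  h = 1 ./ (1 + exp(-X * theta));
  J = (1 / m) * (-y' * log(h) - (1 - y)' * log(1 - h));
end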

Advanced Optimization


"Conjugate gradient", "BFGS", and "L-BFGS" are more sophisticated, faster ways to optimize θ that can be used instead of gradient descent. We suggest that you should not write these more sophisticated algorithms yourself (unless you are an expert in numerical computing) but use the libraries instead, as they're already tested and highly optimized. Octave provides them.
We first need to provide a function that evaluates the following two values for a given input θ:
J(\theta) \quad \text{and} \quad \frac{\partial}{\partial \theta_j} J(\theta)
We can write a single function that returns both of these:
function [jVal, gradient] = costFunction(theta)
  jVal = [...code to compute J(theta)...];
  gradient = [...code to compute derivative of J(theta)...];
end

Then we can use Octave's "fminunc()" optimization algorithm along with the "optimset()" function, which creates an object containing the options we want to send to "fminunc()":

options = optimset('GradObj', 'on', 'MaxIter', 100);
initialTheta = zeros(2,1);
[optTheta, functionVal, exitFlag] = fminunc(@costFunction, initialTheta, options);
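To make this concrete, here is one way costFunction might be filled in for logistic regression (a sketch; X and y are assumed to already exist in the workspace, and fminunc is given an anonymous-function wrapper so costFunction can receive them):

function [jVal, gradient] = costFunction(theta, X, y)
  m = length(y);
  h = 1 ./ (1 + exp(-X * theta));                          % sigmoid hypothesis
  jVal = (1 / m) * (-y' * log(h) - (1 - y)' * log(1 - h)); % cost J(theta)
  gradient = (1 / m) * X' * (h - y);                       % vector of partial derivatives
end

% with costFunction saved (e.g., in costFunction.m), run:
options = optimset('GradObj', 'on', 'MaxIter', 100);
initialTheta = zeros(size(X, 2), 1);
[optTheta, functionVal, exitFlag] = fminunc(@(t) costFunction(t, X, y), initialTheta, options);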



Multiclass Classification: One-vs-all

Now we will approach the classification of data when we have more than two categories. Instead of y = {0,1} we will expand our definition so that y = {0,1...n}.
Since y = {0,1...n}, we divide our problem into n+1 (+1 because the index starts at 0) binary classification problems; in each one, we predict the probability that 'y' is a member of one of our classes.
y \in \{0, 1, \dots, n\}
h_\theta^{(0)}(x) = P(y = 0 \mid x; \theta)
h_\theta^{(1)}(x) = P(y = 1 \mid x; \theta)
\cdots
h_\theta^{(n)}(x) = P(y = n \mid x; \theta)
\text{prediction} = \max_i \left( h_\theta^{(i)}(x) \right)

We are basically choosing one class and then lumping all the others into a single second class. We do this repeatedly, applying binary logistic regression to each case, and then use the hypothesis that returned the highest value as our prediction.
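As a sketch of this prediction step in Octave (assuming all_theta is a hypothetical matrix whose row i+1 holds the trained parameters for class i, and X is a design matrix with a leading column of ones):

g = @(z) 1 ./ (1 + exp(-z));
probs = g(X * all_theta');     % one probability column per class
[~, idx] = max(probs, [], 2);  % pick the most confident classifier per example
prediction = idx - 1;          % convert 1-based row index back to class labels 0..n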
The following image shows how one could classify 3 classes:
[Image: three one-vs-all fits, each binary classifier separating one of the 3 classes from the other two]

Gradient Descent

Remember that the general form of gradient descent is:
\text{Repeat} \; \{ \; \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) \; \}
We can work out the derivative part using calculus to get:
\text{Repeat} \; \{ \; \theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^m \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} \; \}
Notice that this algorithm is identical to the one we used in linear regression. We still have to simultaneously update all values in theta.
A vectorized implementation is:
\theta := \theta - \frac{\alpha}{m} X^{T} \left( g(X\theta) - y \right)
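As a sketch, this vectorized update can be iterated in Octave like so (alpha and num_iters are hypothetical choices; X, y, theta, and m are assumed to be defined as above):

g = @(z) 1 ./ (1 + exp(-z));
alpha = 0.01;                  % learning rate (hypothetical)
num_iters = 1000;
for iter = 1:num_iters
  theta = theta - (alpha / m) * X' * (g(X * theta) - y);   % simultaneous update of all theta_j
end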
Examples (Gradient Descent)

Question: Derive the gradient descent training rule assuming that the target
function representation is:
o_d = w_0 + w_1 x_1 + … + w_n x_n
Define explicitly the cost/error function E, assuming that a set of training
examples D is provided, where each training example d ∈ D is associated with
the target output t_d.

 Solution: 🌝
The error function: E = ∑_{d∈D} (t_d – o_d)²
The gradient descent algorithm: Δw_i = –α (∂E / ∂w_i)
First, represent (∂E / ∂w_i) in terms of the unit inputs x_{id}, outputs o_d, and target values t_d:
(∂E / ∂w_i) = ∂(∑_{d∈D} (t_d – o_d)²) / ∂w_i
= ∑_{d∈D} 2(t_d – o_d) (∂(t_d – o_d) / ∂w_i)
= ∑_{d∈D} 2(t_d – o_d) (–∂o_d / ∂w_i)
= –∑_{d∈D} 2(t_d – o_d) (∂(w_0 + … + w_i x_{id} + … + w_n x_{nd}) / ∂w_i)
= –∑_{d∈D} 2(t_d – o_d) x_{id}
⇒ Δw_i = α ∑_{d∈D} 2(t_d – o_d) x_{id} 🌝🌝🌝
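The derived batch rule translates directly into Octave (a sketch; X is assumed to hold one example per row with a leading column of ones for w_0, t is the vector of targets t_d, w is the weight vector, and alpha is a hypothetical learning rate):

o = X * w;                        % linear unit outputs o_d for every example
w = w + alpha * 2 * X' * (t - o); % Δw_i = α Σ_d 2(t_d − o_d) x_id, for all i at once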
Question: Prove that the LMS training rule performs a gradient descent to
minimize the cost/error function E defined in the previous question.
Solution: 🌝
Given the target function representation
o_d = w_0 + w_1 x_1 + … + w_n x_n,
the LMS training rule is a learning algorithm for choosing the set of weights w_i to
best fit the set of training examples {⟨d, t_d⟩}, i.e., to minimize the squared
error E ≡ ∑_{d∈D} (t_d – o_d)².
The LMS training rule works as follows:
(∀⟨d, t_d⟩) use the current weights w_i to calculate o_d
(∀w_i) w_i ← w_i + η(t_d – o_d) x_{id}   (*)
From the previous derivation, (∂E / ∂w_i) = –∑_{d∈D} 2(t_d – o_d) x_{id}; considering a single training example d, this gives –(1 / (2x_{id})) (∂E / ∂w_i) = (t_d – o_d).
Substituting this in (*): (∀w_i) w_i ← w_i + (η/2)(–∂E / ∂w_i)
This shows that LMS alters weights in the very same proportion as does the
gradient descent algorithm (i.e., –∂E / ∂wi), proving that LMS performs gradient
descent.🌝🌝🌝
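For contrast, the incremental LMS update (*) adjusts the weights one example at a time; a sketch in Octave under the same assumptions as above, with eta a hypothetical learning rate:

for d = 1:rows(X)
  o_d = X(d, :) * w;                     % output for example d
  w = w + eta * (t(d) - o_d) * X(d, :)'; % w_i <- w_i + η(t_d − o_d) x_id
end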



Question: Suppose that we want to build a neural network that classifies two-dimensional data (i.e., X = [x1, x2]) into two classes: diamonds and crosses. We have a set of training data that is plotted as follows:
[Plot: training data in which the crosses occupy a single compact region and the diamonds lie outside it]

🌝A solution is a multilayer FFNN with 2 inputs, one hidden layer with 4 neurons,
and an output layer with 1 neuron. The network should be fully connected, that is,
there should be connections between all nodes in one layer and all the nodes in
the previous (and next) layer. We have to use two inputs because the input data
is two dimensional. We use an output layer with one neuron because we have 2
classes. One hidden layer is enough because there is a single compact region
that contains the data from the crosses-class and does not contain data from the
diamonds-class. This region can have 4 lines as borders, therefore it suffices if
there are 4 neurons at the hidden layer. The 4 neurons in the hidden layer
describe 4 separating lines and the neuron at the output layer describes the
square that is contained between these 4 lines.
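A sketch of the described 2-4-1 network's forward pass in Octave (the weights here are random placeholders; a trained network would have learned values that carve out the square region):

g = @(z) 1 ./ (1 + exp(-z));          % sigmoid activation
W1 = randn(4, 2); b1 = randn(4, 1);   % hidden layer: 4 neurons, one per border line
W2 = randn(1, 4); b2 = randn(1, 1);   % output layer: 1 neuron combining the 4 lines
x = [0.5; -1.2];                      % a made-up 2-D input point
hidden = g(W1 * x + b1);
output = g(W2 * hidden + b2);
class = (output >= 0.5)               % e.g., 1 = crosses, 0 = diamonds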

🌝🌝🌝


🙏🙏 Please hit the 🔔 Bell icon to get notifications from Abhinav's Blog.


