King Yuen Chiu, Anthony

Overview

These are my notes on:

  1. Lecture 1 Basic of Probability Theory, MSBD5012
  2. Chapter 1.2 Probability Theory, Pattern Recognition and Machine Learning
  3. And many other resources ...

Sample Space and Event of a Random Experiment

A Random Experiment is a process with uncertain outcomes. The set of all possible outcomes of the random experiment is called the Sample Space \Omega.

  • If rolling one die is the random experiment, the sample space is \Omega = \{1, 2, 3, 4, 5, 6\}.

A Random Variable is a function that maps a sample space to a new sample space:

  • The easiest one is the identity (1-to-1) function: let the Random Variable X be the outcome of rolling one die (the random experiment). X takes values in \Omega_{X} = \{1, 2, 3, 4, 5, 6\} as well.

An Event is a subset of a sample space \Omega:

  • (X=2) describes an Event subset \Omega_{X=2} = \{2\}; it says we got the value 2 from the random experiment of rolling one die.
  • (X=\mathbf{Odd}) describes an Event subset \Omega_{X=\mathbf{Odd}} = \{1, 3, 5\}; it says we got an odd number from the random experiment of rolling one die.

A Composed Experiment describes the repetitions of the same random experiment, and each repetition is called a trial.
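
As a quick illustration (a minimal sketch with numpy, assuming a fair die), we can simulate many trials of the composed experiment and estimate the probability of the event (X=\mathbf{Odd}) by counting:

import numpy as np

rng = np.random.default_rng(0)
# Simulate 10000 trials of rolling one die
rolls = rng.integers(1, 7, size=10_000)
# The event (X = Odd) corresponds to the subset {1, 3, 5}
p_odd = np.isin(rolls, [1, 3, 5]).mean()
print(p_odd)  # close to 0.5 for a fair die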

Probability and Probability Distribution

Let's consider a random variable X with M discrete outcomes \Omega_{X} = \{x_1, x_2, ..., x_m, ..., x_M\}.

P(X=x_m) denotes the probability of the event of X being x_m. For example, if X is the smartphone model we find in a random experiment, P(X=\mathbf{IPhoneSE}) is the probability of finding an iPhone SE smartphone in the random experiment.

Taking all outcomes into account, P(X) denotes the probability distribution of X:

  • Probability Mass Function (Histogram) for discrete outcomes.
  • Probability Density Function for continuous outcomes.
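
For the discrete case, the probability mass function can be approximated by normalized counts. A minimal sketch, reusing the fair-die simulation (rolls) from the sketch above:

from collections import Counter

# Relative frequency of each outcome approximates P(X = x_m)
counts = Counter(rolls)
pmf = {x: c / len(rolls) for x, c in sorted(counts.items())}
print(pmf)  # each value close to 1/6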

Joint Probability

Let's say there are two random variables (X, Y) in a random experiment.

  • X has M outcomes \Omega_{X} = \{x_1, x_2, ..., x_m, ..., x_M\}.
  • Y has L outcomes \Omega_{Y} = \{y_1, y_2, ..., y_l, ..., y_L\}.

Then each observation in the random experiment is a pair of events (X=x_m, Y=y_l). P(X=x_m, Y=y_l) denotes the joint probability of two events, (X=x_m) and (Y=y_l), occurring at the same time.

For example, if X and Y are the smartphone model and the gender we find in a random experiment, respectively, P(X=\mathbf{IPhoneSE}, Y=\mathbf{Male}) indicates the probability of finding a male with an iPhone SE in the random experiment.

After obtaining the joint probability of each (X=x_m, Y=y_l) pair, we can get the joint probability distribution P(X, Y). To illustrate this idea, the following code generates 1000 (X=x_m, Y=y_l) pairs (xs and ys):

import numpy as np
import pandas as pd
from collections import Counter

# Take 1000 samples between [1, 5)
xs = [np.random.randint(1, 5) for _ in range(1000)]
# Take 1000 samples between [1, 3)
ys = [np.random.randint(1, 3) for _ in range(1000)]
# Count the (x, y) pairs and convert them to a dataframe
joint_dist_df = pd.DataFrame.from_dict(Counter(zip(xs, ys)), orient="index").reset_index()
joint_dist_df.columns = ["Event", "P(X, Y)"]
joint_dist_df["X"] = [e[0] for e in joint_dist_df["Event"]]
joint_dist_df["Y"] = [e[1] for e in joint_dist_df["Event"]]
# Compute the joint probability
joint_dist_df["P(X, Y)"] /= joint_dist_df["P(X, Y)"].sum()
joint_dist_df

The above code generates the following dataframe:

[Figure: Joint Probability Distribution P(X, Y)]

Marginal Probability

Based on the observations of (X=x_m, Y=y_l) above, we can calculate the marginal probability P(X=x_m) by marginalizing out Y:

P(X=x_m) = \sum_{l=1}^{L} P(X=x_m, Y=y_l)

For example, if X and Y are the smartphone model and the gender we find in a random experiment, respectively, we can compute P(X=\mathbf{IPhoneSE}) by adding up the corresponding joint probabilities for each gender:

P(X=\mathbf{IPhoneSE}) = P(X=\mathbf{IPhoneSE}, Y=\mathbf{Male}) + P(X=\mathbf{IPhoneSE}, Y=\mathbf{Female})

Applying this to all of X's outcomes, we get the marginal probability distribution P(X):

P(X) = \sum_{Y} P(X, Y)

This is called the sum rule of probability theory. The following code shows how to compute the marginal probability distribution:

marginal_x_dist_df = joint_dist_df.groupby("X")["P(X, Y)"].sum().reset_index()
marginal_x_dist_df.columns = ["X", "P(X)"]
marginal_x_dist_df

The above code generates the following dataframe:

[Figure: Marginal Probability Distribution P(X)]

Conditional Probability

If we filter the observations by a particular outcome (e.g., X=x_m), we can calculate the probability of observing Y=y_l given the filtered observations (X=x_m). This is called the conditional probability P(Y=y_l|X=x_m).

The following code shows how to compute the conditional probability distribution P(Y|X):

for x_i in range(1, 5):
    print("For x_i =", x_i)
    # Filter observations based on x_i
    _df = joint_dist_df[joint_dist_df["X"]==x_i].copy()
    _df = pd.merge(_df, marginal_x_dist_df, how="left", on="X")
    # Calculate the conditional probabilities
    _sum = _df["P(X, Y)"].sum()
    _df["P(Y|X)"] = _df["P(X, Y)"] / _sum
    display(_df)  # display() is available in Jupyter / IPython; use print(_df) otherwise

The above code generates the following dataframes (one per value of X):

[Figure: Conditional Probability Distribution P(Y|X)]

From the result we can also observe that the conditional probability P(Y=y_l|X=x_m) can be calculated with:

P(Y=y_l|X=x_m) = \frac{P(X=x_m, Y=y_l)}{P(X=x_m)}

And the conditional probability distribution can be written as:

P(Y|X) = \frac{P(X, Y)}{P(X)}

And P(X, Y) = P(Y|X)P(X) is called the product rule of probability theory.
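
As a quick numerical check (a sketch reusing joint_dist_df and marginal_x_dist_df from the code above), we can verify the product rule on our generated data:

check_df = pd.merge(joint_dist_df, marginal_x_dist_df, how="left", on="X")
# P(Y|X) from the conditional probability formula
check_df["P(Y|X)"] = check_df["P(X, Y)"] / check_df["P(X)"]
# The product rule recovers the joint probability
check_df["P(Y|X) * P(X)"] = check_df["P(Y|X)"] * check_df["P(X)"]
print(np.allclose(check_df["P(X, Y)"], check_df["P(Y|X) * P(X)"]))  # True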

Independence

There is a special case for the product rule above. If two events X=x_m and Y=y_l are independent, knowing X=x_m in advance won't change the probability of having Y=y_l; therefore:

  • The conditional probability becomes P(Y=y_l|X=x_m) = P(Y=y_l)
  • The conditional probability distribution becomes P(Y|X) = P(Y)

Applying this new information to the product rule above, we get:

P(X, Y) = P(X) \times P(Y)

if X=x_m and Y=y_l are independent.
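
We can check this on the generated data (a sketch that also builds the marginal distribution of Y, which was not computed above):

marginal_y_dist_df = joint_dist_df.groupby("Y")["P(X, Y)"].sum().reset_index()
marginal_y_dist_df.columns = ["Y", "P(Y)"]
indep_df = joint_dist_df.merge(marginal_x_dist_df, on="X").merge(marginal_y_dist_df, on="Y")
indep_df["P(X) * P(Y)"] = indep_df["P(X)"] * indep_df["P(Y)"]
# With a finite sample the two columns are close but not exactly equal
print(indep_df[["X", "Y", "P(X, Y)", "P(X) * P(Y)"]])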

Test of Independence

# Take 1000 samples between [1, 5)
xs = [np.random.randint(1, 5) for _ in range(1000)]
# Take 1000 samples between [1, 3)
ys = [np.random.randint(1, 3) for _ in range(1000)]

In the Python coding example, we generated 1000 (X=x_m, Y=y_l) pairs (xs and ys) uniformly and independently. But why are we not getting a perfect result of P(X, Y) = P(X) \times P(Y)?

Let's find out the reason by looking at the marginal probabilities P(X) and P(Y). Ideally we should see P(X=x_m)=0.25 and P(Y=y_l)=0.5 from the code output, but that is not the case. Actually, if we increase the sample size, P(X) and P(Y) will get closer to the ideal values.

So there is a limitation of counting observations: the probabilities obtained from counting a finite number of observations are not entirely accurate.
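
A small sketch of this effect (assuming the same uniform sampling as above): as the sample size grows, the empirical P(X=x_m) approaches the ideal value of 0.25.

for n in [100, 1_000, 10_000, 100_000]:
    xs_n = np.random.randint(1, 5, size=n)
    # Empirical P(X = 1) from counting; the ideal value is 0.25
    print(n, np.mean(xs_n == 1))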

We can also use the Chi-squared test to check whether two categorical variables are independent of each other.

  • The null hypothesis: X and Y are independent.
  • The alternative hypothesis: X and Y are dependent.

from sklearn.feature_selection import chi2

# The null hypothesis is that they are independent.
# P <= 0.05: Reject the null hypothesis.
# P > 0.05: Fail to reject the null hypothesis.
chi2(np.array(xs).reshape(-1, 1), np.array(ys).reshape(-1, 1))
# > (array([0.88852322]), array([0.34587782]))

The test returns a P-value of 0.346; therefore, we cannot reject the null hypothesis that X and Y are independent.

Bayes' theorem

Recall the product rule P(X=x_m, Y=y_l) = P(Y=y_l|X=x_m)P(X=x_m). Since the joint probability is symmetric, P(X=x_m, Y=y_l) = P(Y=y_l, X=x_m), we can derive Bayes' theorem like this:

P(Y=y_l|X=x_m)P(X=x_m) = P(X=x_m|Y=y_l)P(Y=y_l)
P(Y=y_l|X=x_m) = \frac{P(X=x_m|Y=y_l)P(Y=y_l)}{P(X=x_m)}

Here,

  • P(Y=y_l|X=x_m) describes the probability of finding our target (Y=y_l) given the Evidence (X=x_m). It is also called the Posterior Probability.
  • P(Y=y_l) describes the probability of finding our target (Y=y_l) before knowing the Evidence (X=x_m). It is also called the Prior Probability.

If the new Evidence (X=x_m) is value-adding, we should see the Posterior P(Y=y_l|X=x_m) deviate from the Prior P(Y=y_l). In other words, the new Evidence (X=x_m) can update our degree of belief.

Also:

P(X=x_m|Y=y_l)/P(X=x_m) > 1 \Rightarrow P(Y=y_l|X=x_m) > P(Y=y_l)

This shows the Evidence (X=x_m) supports (Y=y_l). Therefore, P(X=x_m|Y=y_l)/P(X=x_m) can be understood as the Support of the evidence for our target.
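
We can compute the Posterior, the Prior, and the Support on the generated data (a sketch reusing joint_dist_df, marginal_x_dist_df, and the marginal_y_dist_df built in the independence check above):

bayes_df = joint_dist_df.merge(marginal_x_dist_df, on="X").merge(marginal_y_dist_df, on="Y")
bayes_df["P(X|Y)"] = bayes_df["P(X, Y)"] / bayes_df["P(Y)"]
# Support of the evidence X = x_m for the target Y = y_l
bayes_df["Support"] = bayes_df["P(X|Y)"] / bayes_df["P(X)"]
# Posterior from Bayes' theorem: Prior times Support
bayes_df["P(Y|X)"] = bayes_df["P(Y)"] * bayes_df["Support"]
# Since xs and ys were generated independently, the Support stays close to 1
print(bayes_df[["X", "Y", "P(Y)", "P(Y|X)", "Support"]])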

Expectation and Variance

Given a random variable X that has M outcomes \Omega_{X} = \{x_1, x_2, ..., x_m, ..., x_M\} and a probability distribution P(X) over those M outcomes, the Expectation of a function f over x is defined as:

E_{x}[f] = \sum_{m=1}^{M} f(x_m) P(X=x_m)

In most cases, we won't know the probability distribution P(X), but we can approximate the Expectation by observations. Let's say we have a dataset with N observations \{x_1, x_2, ..., x_n, ..., x_N\} that were drawn from the probability distribution P with replacement. The Expectation of f can be approximated by the N observations:

E_{x}[f] \approx \frac{1}{N}\sum_{n=1}^{N} f(x_n)

The count of observations with x_n = x_m over the sample size N is the approximate probability P(X=x_m).

The Variance of f over x is defined as:

Var_{x}[f] = E_{x}[(f(x) - E_{x}[f(x)])^2]

f(x) - E_{x}[f(x)] is the deviation of f(x) from its Expectation. The Variance is the Expectation of the squared deviation.
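
A minimal sketch (assuming a fair die and f(x) = x, just for illustration) comparing the exact Expectation and Variance against the sampling approximation:

import numpy as np

xs_die = np.arange(1, 7)           # outcomes of a fair die
p = np.full(6, 1 / 6)              # P(X = x_m)
f = lambda x: x                    # f is the identity function here
exact_mean = np.sum(f(xs_die) * p)                     # E_x[f] = 3.5
exact_var = np.sum((f(xs_die) - exact_mean) ** 2 * p)  # Var_x[f] ~ 2.92
samples = np.random.randint(1, 7, size=100_000)        # N observations
approx_mean = f(samples).mean()
approx_var = ((f(samples) - approx_mean) ** 2).mean()
print(exact_mean, approx_mean, exact_var, approx_var)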

Likelihood in Machine Learning

Let's say we have a dataset with N observations \bold{d} = \{d_1, d_2, ..., d_N\} from a distribution P(\bold{D}). We want to estimate a model with 2 parameters \bold{w} = \{w_0, w_1\} from a distribution P(\bold{W}).

L(\bold{W}=\bold{w}|\bold{D}=\bold{d}) = P(\bold{D}=\bold{d}|\bold{W}=\bold{w}) is the Likelihood of having the parameter \bold{w} given the dataset \bold{d}.

In a training process, we have a varying \bold{w} for a given fixed \bold{d}; therefore the Likelihood is a function of the model parameter \bold{w}. For example, given a training dataset \bold{d} and 2 models \bold{w}_a and \bold{w}_b, \bold{w}_a is better if P(\bold{D}=\bold{d}|\bold{W}=\bold{w}_a) > P(\bold{D}=\bold{d}|\bold{W}=\bold{w}_b).

Also, since P(\bold{D}=\bold{d}|\bold{W}=\bold{w}) is a function of \bold{w} (varying \bold{w}) in this setting, it is not a probability distribution. P(\bold{D}=\bold{d}|\bold{W}=\bold{w}) is a probability distribution only if it is a function of \bold{d} (varying \bold{d}).
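
A sketch of comparing two models by Likelihood (assuming, purely for illustration, that the data come from a Gaussian and the two candidate models differ only in their mean):

import numpy as np
from scipy.stats import norm

d = np.random.normal(loc=2.0, scale=1.0, size=100)   # an observed dataset d
# Two candidate models: w_a = (mean 2, std 1) and w_b = (mean 0, std 1)
log_lik_a = norm.logpdf(d, loc=2.0, scale=1.0).sum()
log_lik_b = norm.logpdf(d, loc=0.0, scale=1.0).sum()
print(log_lik_a > log_lik_b)  # True: w_a explains this dataset better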

Maximum Likelihood Estimation (MLE)

There are N observations \{d_1, d_2, ..., d_N\} in the dataset \bold{D}=\bold{d}:

P(\bold{D}=\bold{d}|\bold{W}=\bold{w}) = P(d_1, d_2, ..., d_N|\bold{W}=\bold{w})

Assume those samples are independent:

P(\bold{D}=\bold{d}|\bold{W}=\bold{w}) = P(d_1|\bold{W}=\bold{w}) \times P(d_2|\bold{W}=\bold{w}) \times ... \times P(d_N|\bold{W}=\bold{w}) = \prod_{i=1}^{N} P(d_i|\bold{W}=\bold{w})

To find the best \bold{W}=\bold{w}, we are going to maximize the Likelihood:

\bold{\hat{w}} = \argmax_{\bold{w}\in\bold{W}} P(\bold{D}=\bold{d}|\bold{W}=\bold{w}) = \argmax_{\bold{w}\in\bold{W}} \prod_{i=1}^{N} P(d_i|\bold{W}=\bold{w})

The objective is also equivalent to maximizing the Log-likelihood:

\bold{\hat{w}} = \argmax_{\bold{w}\in\bold{W}} P(\bold{D}=\bold{d}|\bold{W}=\bold{w}) = \argmax_{\bold{w}\in\bold{W}} \sum_{i=1}^{N} \ln P(d_i|\bold{W}=\bold{w})
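
A minimal sketch of MLE for a coin-flip model (an assumed Bernoulli example, not from the lecture), where the single parameter w is the probability of heads and the Log-likelihood is maximized over a grid of candidate values:

import numpy as np

rng = np.random.default_rng(42)
d = rng.binomial(1, 0.7, size=200)            # observed flips, true P(heads) = 0.7
ws = np.linspace(0.01, 0.99, 99)              # candidate parameter values
# Log-likelihood of the independent observations for each candidate w
log_lik = np.array([np.sum(d * np.log(w) + (1 - d) * np.log(1 - w)) for w in ws])
w_mle = ws[np.argmax(log_lik)]
print(w_mle)  # close to the sample mean of d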

Maximum a Posteriori Estimation (MAP)

In MLE, the Likelihood P(\bold{D}=\bold{d}|\bold{W}=\bold{w}) is a function of \bold{w} (varying \bold{w}), so it is not a probability distribution. To get a function of \bold{w} that is a probability distribution given \bold{D}=\bold{d}, we use Bayes' theorem; the Posterior probability of the model parameter \bold{w} is the one we want.

P(\bold{W}=\bold{w}|\bold{D}=\bold{d}) = \frac{P(\bold{D}=\bold{d}|\bold{W}=\bold{w})P(\bold{W}=\bold{w})}{P(\bold{D}=\bold{d})}

Since P(\bold{D}=\bold{d}) is a positive constant that does not depend on \bold{w}, MAP becomes:

\bold{\hat{w}} = \argmax_{\bold{w}\in\bold{W}} P(\bold{W}=\bold{w}|\bold{D}=\bold{d}) = \argmax_{\bold{w}\in\bold{W}} P(\bold{D}=\bold{d}|\bold{W}=\bold{w})P(\bold{W}=\bold{w})

The Prior P(\bold{W}=\bold{w}) can be considered as external knowledge about the parameter. MAP is like an external-knowledge-regularized MLE.

For example, the external knowledge can come from a human expert; then, under the MAP setting, a bad model parameter \bold{w}_{bad} can be avoided by assigning a low P(\bold{W}=\bold{w}_{bad}). An example of bad parameters: a model that relies heavily on gender information for predicting credit default.

When the Prior P(\bold{W}=\bold{w}) is a uniform distribution, MAP becomes MLE, because no matter what \bold{w} is, the Prior probability is the same.
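
Continuing the coin-flip sketch above (reusing ws, log_lik, and w_mle), a Beta prior on w turns MLE into MAP; the Beta prior is an assumed choice for illustration, and a uniform prior makes the two estimates coincide:

from scipy.stats import beta

# A prior that favours w near 0.5, e.g. an expert who believes the coin is roughly fair
log_prior = beta.logpdf(ws, a=20, b=20)
log_posterior = log_lik + log_prior          # up to the constant -ln P(D = d)
w_map = ws[np.argmax(log_posterior)]
# A uniform prior (Beta(1, 1)) adds the same value everywhere, so MAP equals MLE
w_map_uniform = ws[np.argmax(log_lik + beta.logpdf(ws, a=1, b=1))]
print(w_mle, w_map, w_map_uniform)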