ML Refresher: Probability Theory, Maximum Likelihood Estimation and Maximum A Posteriori
Overview
Machine Learning Refresher is a series of articles that records my journey of relearning the fundamentals of Machine Learning.
This article is a compiled version of
- Section 1.2 Probability Theory of the "Pattern Recognition and Machine Learning" book.
- MSBD5012 Machine Learning course materials
- Other online resources
- My own understanding
I added two things here to enhance my understanding:
- Emphasis on probability vs. probability distribution.
- Python code to generate an example.
Sample Space and Event of a Random Experiment
A **Random Experiment** is a process with uncertain outcomes. The set of all possible outcomes of the random experiment is called the **Sample Space** $S$.
- If rolling one die is the random experiment, the sample space is $S = \{1, 2, 3, 4, 5, 6\}$.
A **Random Variable** is a function that maps a sample space to a new sample space:
- The easiest one is the identity (1-to-1) function: let the **Random Variable** $X$ be the outcomes (the sample space) of rolling one die (the random experiment). $X$ takes the values $\{1, 2, 3, 4, 5, 6\}$ as well.
An **Event** is a subset of a sample space:
- $X = 1$ describes an **Event** subset $\{1\}$: we got the value $1$ from the random experiment of rolling one die.
- $X \in \{1, 3, 5\}$ describes an **Event** subset $\{1, 3, 5\}$: we got an odd value from the random experiment of rolling one die.
A **Composed Experiment** describes repetitions of the same random experiment, and each repetition is called a **trial**.
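To make these terms concrete, here is a minimal sketch (assuming NumPy, in the same spirit as the code later in this article) that runs a composed experiment of 10 trials of rolling one die:
import numpy as np

# A composed experiment: repeat the "roll one die" random experiment 10 times.
# Each repetition is a trial, and each trial's outcome is a value of the random variable X.
trials = [np.random.randint(1, 7) for _ in range(10)]
print(trials)  # e.g. [3, 6, 1, 4, 4, 2, 5, 1, 6, 3]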
Probability and Probability Distribution
Let's consider a random variable $X$ with discrete outcomes $x_1, \dots, x_M$.
$p(X = x_i)$ denotes the **probability** of the event of $X$ being $x_i$. For example, if $X$ is the smartphone model we found in a random experiment, $p(X = \text{iPhone SE})$ is the probability of finding an iPhone SE smartphone in the random experiment.
Taking all outcomes into account, $p(X)$ denotes the **probability distribution** of $X$:
- a Probability Mass Function (histogram) for discrete outcomes.
- a Probability Density Function for continuous outcomes.
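As a quick illustration (a sketch, not from the original code), we can estimate the probability mass function of a fair die by counting the outcomes of many trials:
import numpy as np
from collections import Counter

# Roll one die 10,000 times and count how often each outcome appears
rolls = [np.random.randint(1, 7) for _ in range(10000)]
counts = Counter(rolls)

# Empirical probability mass function: p(X = x_i) = count(x_i) / number of trials
pmf = {x: counts[x] / len(rolls) for x in sorted(counts)}
print(pmf)  # each value should be close to 1/6 ≈ 0.167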
Joint Probability
Let's say there are two random variables, $X$ and $Y$, in a random experiment.
- $X$ has $M$ outcomes $x_1, \dots, x_M$.
- $Y$ has $L$ outcomes $y_1, \dots, y_L$.
Then each observation in the random experiment is a pair of events $(X = x_i, Y = y_j)$. $p(X = x_i, Y = y_j)$ denotes the **joint probability** of the two events, $X = x_i$ and $Y = y_j$, occurring at the same time.
For example, if $X$ and $Y$ are the smartphone model and the gender we found in a random experiment, respectively, $p(X = \text{iPhone SE}, Y = \text{male})$ indicates the probability of finding a male with an iPhone SE in the random experiment.
After obtaining the joint probability of each pair, we can get the **joint probability distribution** $p(X, Y)$. To illustrate this idea, the following code generates 1000 pairs (`xs` and `ys`):
import numpy as np
import pandas as pd
from collections import Counter

# Take 1000 samples between [1, 5)
xs = [np.random.randint(1, 5) for _ in range(1000)]
# Take 1000 samples between [1, 3)
ys = [np.random.randint(1, 3) for _ in range(1000)]
# Count the (x, y) pairs and convert them to a dataframe
joint_dist_df = pd.DataFrame.from_dict(Counter(zip(xs, ys)), orient="index").reset_index()
joint_dist_df.columns = ["Event", "P(X, Y)"]
joint_dist_df["X"] = [e[0] for e in joint_dist_df["Event"]]
joint_dist_df["Y"] = [e[1] for e in joint_dist_df["Event"]]
# Convert the counts into joint probabilities
joint_dist_df["P(X, Y)"] /= joint_dist_df["P(X, Y)"].sum()
joint_dist_df
The above code generates the following dataframe:

Marginal Probability
Based on the observations of $(X, Y)$, we can calculate the **marginal probability** $p(X = x_i)$ by marginalizing out $Y$:

$$p(X = x_i) = \sum_{j=1}^{L} p(X = x_i, Y = y_j)$$

For example, if $X$ and $Y$ are the smartphone model and the gender we found in a random experiment, respectively, we can compute $p(X = \text{iPhone SE})$ by adding up the corresponding joint probabilities for each gender.
Applying this to all of $X$'s outcomes, we get the **marginal probability distribution** of $X$:

$$p(X) = \sum_{Y} p(X, Y)$$

This is called the **sum rule** of probability theory. The following code shows how to compute the marginal probability distribution:
marginal_x_dist_df = joint_dist_df.groupby("X")["P(X, Y)"].sum().reset_index()
marginal_x_dist_df.columns = ["X", "P(X)"]
marginal_x_dist_df
The above code generates the following dataframe:

Conditional Probability
If we filter the observations by a particular outcome (e.g., $X = x_i$), we can calculate the probability of observing $Y = y_j$ given the filtered observations. This is called the **conditional probability** $p(Y = y_j \mid X = x_i)$.
The following code shows how to compute the conditional probability distribution $p(Y \mid X = x_i)$:
for x_i in range(1, 5):
    print("For x_i =", x_i)
    # Filter observations based on x_i
    _df = joint_dist_df[joint_dist_df["X"] == x_i].copy()
    _df = pd.merge(_df, marginal_x_dist_df, how="left", on="X")
    # Calculate the conditional probabilities
    _sum = _df["P(X, Y)"].sum()
    _df["P(Y|X)"] = _df["P(X, Y)"] / _sum
    display(_df)
The above code generates the following dataframe:

From the result, we can also observe that the conditional probability can be calculated with:

$$p(Y = y_j \mid X = x_i) = \frac{p(X = x_i, Y = y_j)}{p(X = x_i)}$$

And the conditional probability distribution can be written as:

$$p(Y \mid X) = \frac{p(X, Y)}{p(X)}$$

Rearranged as $p(X, Y) = p(Y \mid X)\,p(X)$, this is called the **product rule** of probability theory.
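We can sanity-check the product rule on the sampled data (a sketch that reuses the `joint_dist_df` and `marginal_x_dist_df` dataframes computed above):
# Attach p(X) to every (x, y) pair, recompute p(Y|X), and check p(X, Y) = p(Y|X) * p(X)
check_df = pd.merge(joint_dist_df, marginal_x_dist_df, how="left", on="X")
check_df["P(Y|X)"] = check_df["P(X, Y)"] / check_df["P(X)"]
check_df["P(Y|X) * P(X)"] = check_df["P(Y|X)"] * check_df["P(X)"]

# The two columns match up to floating-point error
print(np.allclose(check_df["P(X, Y)"], check_df["P(Y|X) * P(X)"]))  # True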
Independence
There is a special case of the **product rule** above. If two events $X = x_i$ and $Y = y_j$ are independent, knowing $X = x_i$ in advance won't change the probability of having $Y = y_j$; therefore:
- The conditional probability becomes $p(Y = y_j \mid X = x_i) = p(Y = y_j)$.
- The conditional probability distribution becomes $p(Y \mid X) = p(Y)$.
Applying this new information to the **product rule** above, we get:

$$p(X, Y) = p(X)\,p(Y)$$

if $X$ and $Y$ are independent.
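The following sketch (again reusing the dataframes from above) compares the sampled joint probabilities $p(X, Y)$ with the product of the sampled marginals $p(X)\,p(Y)$; if the counts reflected independence perfectly, the two columns would be identical:
# Marginal distribution of Y, computed the same way as the marginal of X
marginal_y_dist_df = joint_dist_df.groupby("Y")["P(X, Y)"].sum().reset_index()
marginal_y_dist_df.columns = ["Y", "P(Y)"]

# Attach p(X) and p(Y) to every (x, y) pair and compare p(X, Y) with p(X) * p(Y)
indep_df = pd.merge(joint_dist_df, marginal_x_dist_df, how="left", on="X")
indep_df = pd.merge(indep_df, marginal_y_dist_df, how="left", on="Y")
indep_df["P(X) * P(Y)"] = indep_df["P(X)"] * indep_df["P(Y)"]
indep_df[["X", "Y", "P(X, Y)", "P(X) * P(Y)"]]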
Test of Independence
# Take 1000 samples between [1, 5)
xs = [np.random.randint(1, 5) for _ in range(1000)]
# Take 1000 samples between [1, 3)
ys = [np.random.randint(1, 3) for _ in range(1000)]
In the Python coding example, we generated 1000 pairs (`xs` and `ys`) uniformly and independently. But why are we not getting a perfect result of $p(X, Y) = p(X)\,p(Y)$?
We can find the root cause by looking at the marginal probabilities of $X$ and $Y$. Ideally we should see $p(X = x_i) = 0.25$ and $p(Y = y_j) = 0.5$, but that is not the case.
We can use the Chi-squared test to check whether two categorical variables are independent or dependent on each other.
- The null hypothesis: $X$ and $Y$ are independent.
- The alternative hypothesis: $X$ and $Y$ are dependent.
from sklearn.feature_selection import chi2
# The null hypothesis is that they are independent.
# P <= 0.05: Reject the null hypothesis.
# P > 0.05: Fail to reject the null hypothesis.
chi2(np.array(xs).reshape(-1, 1), np.array(ys).reshape(-1, 1))
# > (array([0.88852322]), array([0.34587782]))
The test returns a p-value of 0.346; therefore, we cannot reject the null hypothesis that $X$ and $Y$ are independent. So there is a problem in the process above: the probabilities obtained from counting observations are not entirely accurate.
Frequentist Probability and Bayesian Probability
The entire discussion above is based on counting observations. This is the **Frequentist** view of probability. Frequentist probability requires many repetitions of a random experiment to obtain an accurate probability distribution. In the code example, we only sampled 1000 pairs (`xs` and `ys`), which is too small. If we increase the sample size, the marginal probabilities get closer to the ideal values, as the sketch below illustrates.
However, it is hard to produce the exact $p(X, Y) = p(X)\,p(Y)$ even with a very large number of trials. In this case, since we know `xs` and `ys` are generated from two independent processes, we can set $p(X)$ and $p(Y)$ manually instead. This is the perspective of **Bayesian** probability. In Bayes' theorem, such a human-set probability is called the prior probability.
In other words, based on human knowledge, we can decide whether to use the $p(X)$ and $p(Y)$ obtained from counting observations or the following theoretical values (see the sketch after this list):
- Set $p(X)$ to be a uniform distribution with $p(X = x_i) = 0.25$.
- Set $p(Y)$ to be a uniform distribution with $p(Y = y_j) = 0.5$.
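Under the independence assumption, these hand-set priors already determine the theoretical joint distribution; a minimal sketch (the uniform values are the assumption stated above):
# Priors set by hand (the Bayesian perspective), instead of counting observations
p_x = {x: 0.25 for x in range(1, 5)}  # uniform over {1, 2, 3, 4}
p_y = {y: 0.5 for y in range(1, 3)}   # uniform over {1, 2}

# Assuming X and Y are independent, the theoretical joint probability is p(X) * p(Y)
p_xy = {(x, y): p_x[x] * p_y[y] for x in p_x for y in p_y}
print(p_xy)  # every (x, y) pair has probability 0.125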
In general, a prior probability (human knowledge) works better for a small dataset, because frequentist probability requires a large dataset. However, one clear disadvantage of Bayesian probability is that the prior is based on personal belief: my assumption that `xs` and `ys` come from two independent processes could be wrong. A badly chosen prior probability distribution is a real problem for Bayesian probability.
Bayes' theorem
Recall the **product rule**: since the joint probability distribution is symmetrical, $p(X, Y) = p(Y, X)$, we have $p(Y \mid X)\,p(X) = p(X \mid Y)\,p(Y)$, from which we can deduce **Bayes' theorem**:

$$p(Y \mid X) = \frac{p(X \mid Y)\,p(Y)}{p(X)}$$

- $Y$ is the event we want to analyze, and $X$ is the event we take as evidence to support the occurrence of $Y$.
- $p(Y)$ is the prior probability of $Y$ without knowing the evidence $X$. It is the best guess of the probability of $Y$ occurring without new information.
- $p(X)$ is the prior probability of finding the evidence. It normalizes the likelihood $p(X \mid Y)$.
- $p(X \mid Y)$ is the likelihood. It is the probability of finding the evidence $X$ given that $Y$ has already occurred.
- $p(Y \mid X)$ is the posterior probability we want to calculate after seeing the evidence $X$.
In terms of probability, $p(Y \mid X)$ describes the probability of finding our target $Y$ given the evidence $X$, while $p(Y)$ describes the probability of finding our target before knowing the evidence $X$. If the new evidence is value-adding, we should see $p(Y \mid X)$ deviate from $p(Y)$. In other words, the new evidence $X$ can update our degree of belief in $Y$.
Therefore, the ratio $\frac{p(X \mid Y)}{p(X)}$ can be understood as the support of the evidence for our target: since $p(Y \mid X) = \frac{p(X \mid Y)}{p(X)}\,p(Y)$, a ratio greater than $1$ means $p(Y \mid X) > p(Y)$, which shows the evidence is for $Y$.
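A minimal numerical sketch of such an update (the numbers below are made up purely for illustration):
# Hypothetical numbers for illustrating a single Bayesian update
prior = 0.3       # p(Y): belief in the target before seeing the evidence
likelihood = 0.8  # p(X|Y): probability of the evidence if Y is true
evidence = 0.4    # p(X): overall probability of the evidence

# Bayes' theorem: p(Y|X) = p(X|Y) * p(Y) / p(X)
posterior = likelihood * prior / evidence
print(posterior)  # ≈ 0.6, so the support factor p(X|Y)/p(X) = 2 raised our belief from 0.3 to 0.6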
TODOs:
- Example for Bayesian update.
- Dive deeper into Bayesian inference.
Estimating a Model's Parameter
TODO