In the last lecture, we learned that if each variable $\mathsf{x}_i$ is conditionally independent of its non-descendants given its parents, written as
$$ \begin{aligned}
\forall i, \quad \mathsf{x}_{i} \perp
\mathsf{x}_{\mathrm{nd}(i)} \vert \mathsf{x}_{\mathrm{pa}(i)},
\end{aligned}
$$
then the joint distribution can be expressed in the factorized form as
$$ \begin{aligned}
p(x_1, x_2, x_3, \cdots, x_D)
=\prod_{i=1}^{D} p(x_i|x_{\mathrm{pa}(i)}).
\end{aligned} $$
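To make the factorization concrete, here is a minimal sketch in Python for binary variables, assuming a hypothetical chain $\mathsf{x}_1 \to \mathsf{x}_2 \to \mathsf{x}_3$ with made-up conditional probability tables; the joint probability of any full assignment is just the product of the per-variable conditionals.

```python
# Hypothetical chain x1 -> x2 -> x3 over binary variables.
# cpt[i] maps the parents' values to p(x_i = 1 | x_pa(i)).
parents = {1: (), 2: (1,), 3: (2,)}
cpt = {
    1: {(): 0.3},                # p(x1 = 1)
    2: {(0,): 0.2, (1,): 0.9},   # p(x2 = 1 | x1)
    3: {(0,): 0.5, (1,): 0.7},   # p(x3 = 1 | x2)
}

def joint(assign):
    """p(x1, ..., xD) = prod_i p(x_i | x_pa(i))."""
    prob = 1.0
    for i, pa in parents.items():
        p1 = cpt[i][tuple(assign[j] for j in pa)]
        prob *= p1 if assign[i] == 1 else 1.0 - p1
    return prob

print(joint({1: 1, 2: 1, 3: 0}))  # 0.3 * 0.9 * (1 - 0.7)
```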
Next, we will see a few examples.
Consider the directed model below for reasoning about a person's health. The model assumes that: (a) flu causes sinus infections, (b) allergies cause sinus infections, (c) sinus infections cause runny noses, and (d) sinus infections cause headaches.
**Fig 1:** Flu graphical model
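Reading the edges off (a)-(d), and assuming flu and allergies are root nodes (no causes of theirs are stated), the general factorization specializes to
$$ \begin{aligned}
p(f, a, s, r, h) = p(f)\, p(a)\, p(s | f, a)\, p(r | s)\, p(h | s),
\end{aligned} $$
where $f, a, s, r, h$ denote flu, allergy, sinus infection, runny nose, and headache.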
Consider another classic example: starting a car. Suppose your car does not start. Then you try the radio, and it works. How should this information about the radio change your estimate of how likely it is that the gas tank is empty?
In the model below, different variables represent whether various parts of the car are functional. The model helps us reason about questions such as: what can we conclude about "GasInTank" given that the car does not start and the radio works?
**Fig 2:** Car graphical model
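One way to answer such queries is inference by enumeration: sum the factorized joint over every assignment consistent with the evidence. The sketch below does this for a hypothetical five-variable version of the network (battery, gas, radio, ignition, starts) with made-up probabilities; only the factorization structure, not the numbers, matters for the illustration.

```python
from itertools import product

def bernoulli(p, value):
    """Probability of a binary value under a Bernoulli(p) factor."""
    return p if value else 1.0 - p

def joint(b, g, r, i, s):
    """p(battery, gas, radio, ignition, starts) as a product of
    conditionals, one factor per node given its parents."""
    return (bernoulli(0.90, b)                           # p(battery good)
            * bernoulli(0.90, g)                         # p(gas in tank)
            * bernoulli(0.95 if b else 0.05, r)          # p(radio works | battery)
            * bernoulli(0.97 if b else 0.0, i)           # p(ignition ok | battery)
            * bernoulli(0.99 if (i and g) else 0.0, s))  # p(starts | ignition, gas)

# p(GasInTank = empty | starts = False, radio = works), summing the
# joint over the unobserved variables (battery, gas, ignition).
num = den = 0.0
for b, g, i in product([True, False], repeat=3):
    w = joint(b, g, True, i, False)  # clamp radio = works, starts = False
    den += w
    if not g:                        # accumulate the "gas empty" cases
        num += w
print(num / den)
```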
Next, consider the Naive Bayes model, shown in Fig 3.
**Fig 3:** Naive Bayes
In this model, we assume that, given a (discrete) class variable $c$, the observed features $(x_1, x_2, x_3, \dots, x_D)$ are conditionally independent of one another, i.e.
$$ \begin{aligned}
\forall i \neq j, \quad \mathsf{x}_{i} \perp \mathsf{x}_{j} \vert \mathsf{c},
\end{aligned}
$$
so the joint distribution factorizes as
$$ \begin{aligned}
p(c, x_1, x_2, \cdots, x_D)
= p(c) \prod_{i=1}^{D} p(x_i|c).
\end{aligned} $$
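A minimal sketch of this factorization for binary features, with hypothetical (made-up) parameters: the class posterior is proportional to $p(c)\prod_i p(x_i|c)$.

```python
import numpy as np

# Hypothetical Bernoulli Naive Bayes parameters (numbers are made up):
prior = np.array([0.6, 0.4])          # p(c) for classes c = 0, 1
theta = np.array([[0.2, 0.7, 0.5],    # p(x_i = 1 | c = 0)
                  [0.8, 0.3, 0.5]])   # p(x_i = 1 | c = 1)

def class_posterior(x):
    """p(c | x), using p(c, x) = p(c) * prod_i p(x_i | c)."""
    x = np.asarray(x)
    lik = (theta**x * (1 - theta)**(1 - x)).prod(axis=1)  # prod_i p(x_i | c)
    joint = prior * lik                                   # p(c) p(x | c)
    return joint / joint.sum()

print(class_posterior([1, 0, 1]))  # posterior over the two classes
```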