Prior Probability: Examples and Calculations

What Is Prior Probability?

Prior probability, in Bayesian statistics, is the probability of an event before new data is collected. It is the best rational assessment of the probability of an outcome based on current knowledge, before an experiment is performed.

Prior probability can be compared with posterior probability.

Key Takeaways

  • A prior probability, in Bayesian statistics, is the ex-ante likelihood of an event occurring before taking into consideration any new (posterior) information.
  • The posterior probability is calculated by updating the prior probability using Bayes' theorem.
  • In statistical terms, the prior probability is the basis for posterior probabilities.

Understanding Prior Probability

The prior probability of an event will be revised as new data or information becomes available, to produce a more accurate measure of a potential outcome. That revised probability becomes the posterior probability and is calculated using Bayes' theorem. In statistical terms, the posterior probability is the probability of event A occurring given that event B has occurred.

Example

For example, three acres of land are labeled A, B, and C. One acre has reserves of oil below its surface, while the other two do not. The prior probability of oil being found on acre C is one third, or about 0.333. But if a drilling test conducted on acre B indicates that no oil is present there, the posterior probability of oil being found on acre A or acre C becomes 0.5, since each of the two remaining acres now has a one-in-two chance.
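As a minimal sketch of this update in Python (the acre labels and probabilities simply mirror the example above):

```python
# Three-acre oil example: each acre starts with an equal prior of holding the oil.
priors = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}

# A drilling test on acre B finds no oil, so eliminate B and
# renormalize the remaining priors to get the posteriors.
priors.pop("B")
total = sum(priors.values())
posteriors = {acre: p / total for acre, p in priors.items()}

print(posteriors)  # {'A': 0.5, 'C': 0.5}
```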

Bayes' theorem is often applied to data mining and machine learning.

Bayes' Theorem

$$
\begin{aligned}
P(A\mid B) &= \frac{P(A\cap B)}{P(B)} = \frac{P(A)\times P(B\mid A)}{P(B)}\\[4pt]
\textbf{where:}\quad
P(A) &= \text{the prior probability of }A\text{ occurring}\\
P(A\mid B) &= \text{the conditional probability of }A\text{ given that }B\text{ occurs}\\
P(B\mid A) &= \text{the conditional probability of }B\text{ given that }A\text{ occurs}\\
P(B) &= \text{the probability of }B\text{ occurring}
\end{aligned}
$$

If we are interested in the probability of an event for which we have prior observations, we call this the prior probability. We'll deem this event A, and its probability P(A). If there is a second event that affects P(A), which we'll call event B, then we want to know the probability of A given that B has occurred. In probabilistic notation, this is P(A|B), known as the posterior probability or revised probability, because it is assessed after the new event, hence the "post" in posterior. This is how Bayes' theorem allows us to update our previous beliefs with new information.
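A small helper that applies the formula directly; the input values below are placeholders, not figures from the article:

```python
def posterior(prior_a: float, likelihood_b_given_a: float, prob_b: float) -> float:
    """Bayes' theorem: P(A|B) = P(A) * P(B|A) / P(B)."""
    return prior_a * likelihood_b_given_a / prob_b

# Placeholder values: P(A) = 0.3, P(B|A) = 0.8, P(B) = 0.5.
print(posterior(0.3, 0.8, 0.5))  # 0.48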

What Is the Difference Between Prior and Posterior Probability?

Prior probability represents what is originally believed before new evidence is introduced, and posterior probability takes this new information into account.

How Is Bayes' Theorem Used in Finance?

In finance, Bayes' theorem can be used to update a previous belief once new information is obtained. This can be applied to stock returns, observed volatility, and so on. Bayes' theorem can also be used to rate the risk of lending money to potential borrowers by updating the likelihood of default based on past experience.
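A hedged sketch of the lending case: all of the rates below are hypothetical, chosen only to show how a missed payment would revise a baseline default probability.

```python
# Hypothetical lending example: update a borrower's default risk after a missed payment.
prior_default = 0.05           # P(default): assumed baseline default rate
p_missed_given_default = 0.60  # P(missed payment | default), assumed from past experience
p_missed_given_repaid = 0.10   # P(missed payment | no default), assumed

# Total probability of observing a missed payment.
p_missed = (p_missed_given_default * prior_default
            + p_missed_given_repaid * (1 - prior_default))

# Posterior P(default | missed payment) via Bayes' theorem.
posterior_default = p_missed_given_default * prior_default / p_missed
print(round(posterior_default, 3))  # 0.24
```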

How Is Bayes' Theorem Used in Machine Learning?

Bayes' theorem provides a useful method for thinking about the relationship between a data set and a probability. It is therefore useful in fitting data and training algorithms, where a model can update its posterior probabilities with each round of training.
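A simple illustration of that round-by-round updating, using a standard Beta-Binomial conjugate prior to estimate a coin's bias; the batch counts are made-up data, not from the article:

```python
# Sequential Bayesian updating: each batch of observations turns the
# current posterior into the prior for the next batch.
alpha, beta = 1.0, 1.0  # Beta(1, 1) = uniform prior over the coin's bias

batches = [(7, 3), (6, 4), (8, 2)]  # (heads, tails) per batch -- hypothetical data
for heads, tails in batches:
    alpha += heads  # posterior parameters after this batch become
    beta += tails   # the prior parameters for the next batch
    print(f"posterior mean = {alpha / (alpha + beta):.3f}")
```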