## What Is Bayes’ Theorem?

Bayes’ theorem—also known as Bayes’ rule and Bayes’ law—is a formula mathematicians use to calculate conditional probability. It takes its name from the statistician Reverend Thomas Bayes, whose formulation of the rule was published posthumously in the eighteenth century.

The theorem works by combining the probabilities of events on their own with the probability of two events occurring together. By calculating all this information together, you can update your estimate of how likely one event is once you learn that a related event has occurred. You can use it as a means to predict future events as well.

## The Formula for Bayes’ Theorem

When you take all the variables into account, the Bayes’ theorem formula looks like this:

P(A|B) = P(B|A)P(A) / P(B)

Your numerator is the probability of event B given event A, multiplied by the probability of event A occurring on its own. You then divide this by the denominator: the probability of event B occurring on its own. Once you’re finished with these calculations, you’ll have the probability of event A given that event B has already occurred.
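The formula is simple enough to compute directly. Here is a minimal Python sketch, with probabilities made up purely for illustration:

```python
def bayes(p_b_given_a, p_a, p_b):
    """Return P(A|B) using Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Made-up numbers: P(B|A) = 0.9, P(A) = 0.2, P(B) = 0.3
print(round(bayes(0.9, 0.2, 0.3), 2))  # 0.6
```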

## 4 Key Elements to Bayes’ Theorem

To put Bayes’ rule into practice, you must include the essential elements. Use these four variables for your Bayesian forecasting:

1. **P(A)**: As you attempt to make statistical inferences, you need to start with the prior probability of one key event occurring. This is P(A), the probability of event A occurring on its own, before you see any new evidence. For instance, suppose you want to reason about the results of pregnancy tests. P(A) could stand for the probability that a person taking the test is pregnant.

2. **P(B)**: In addition to P(A), you’ll also need to track the probability of event B occurring on its own. This is the evidence you observe. Continuing the example, P(B) could stand for the overall probability of obtaining a positive test result, whether that result is a true positive or a false positive.

3. **P(A|B)**: This is the conditional probability of event A occurring given that event B has already occurred, known as the posterior probability. It is the quantity the theorem solves for. In the example, P(A|B) stands for the probability that a person is pregnant given that they received a positive test result.

4. **P(B|A)**: To complete the formula, you also need the probability of event B occurring given that event A has already happened, known as the likelihood. This reverses the conditioning in P(A|B). In the example, P(B|A) stands for the probability of obtaining a positive result given that the person is pregnant, also called the test’s true positive rate, or sensitivity.
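Putting the four variables together, here is a worked version of the pregnancy-test example in Python. All of the numbers are invented for illustration; note that P(B) can itself be computed from the other pieces using the law of total probability:

```python
# Hypothetical numbers, chosen only for illustration:
p_pregnant = 0.10      # P(A): prior probability the test taker is pregnant
sensitivity = 0.99     # P(B|A): probability of a positive result if pregnant
false_positive = 0.05  # probability of a positive result if not pregnant

# P(B): overall probability of a positive result (law of total probability)
p_positive = sensitivity * p_pregnant + false_positive * (1 - p_pregnant)

# P(A|B): probability of pregnancy given a positive result (the posterior)
p_pregnant_given_positive = sensitivity * p_pregnant / p_positive
print(round(p_pregnant_given_positive, 3))  # 0.688
```

Even with a very sensitive test, the posterior depends heavily on the prior P(A), which is exactly the relationship the theorem captures.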

## Why Is Bayes’ Theorem Used?

Bayes’ theorem is a formula you can use to calculate the conditional probability of an event: the probability of event A occurring given that event B has already occurred. In other words, Bayes’ theorem helps you pinpoint the probability of one thing happening once you know that a related thing has happened.

You can use Bayes’ theorem to test the probability of a hypothesis coming to fruition in an experiment or a situation. For even more precise predictions, you can utilize other elements of probability theory in conjunction with Bayesian inferences to determine how likely it is for certain events to occur.

## 3 Examples of Bayes’ Theorem in Action

Bayesian statistics has plenty of real-world applications. Consider these three Bayes’ theorem examples in action:

1. **Cancer risk**: Suppose you want to make an inference about a person’s likelihood of getting breast cancer. Through the application of Bayes’ theorem, you can compare medical test results and percentages against each other to come to a more concrete conclusion about how likely specific groups or individuals are to suffer from the ailment.

2. **Drug testing**: When it comes to drug testing, positive and negative results can be deceptive at times. Bayesian analysis allows you to weigh false negative and false positive rates against the likelihood of obtaining a true negative or true positive result. This, in turn, can help test manufacturers deduce how to improve their tests when they study the circumstances most likely to cause false results.

3. **Machine learning**: Bayesian inference is as useful in machine learning as it is in any other arena. Artificial intelligence professionals build systems, such as naive Bayes classifiers, that use Bayesian inference to calculate the probability of specific events or labels from observed data. As these systems gain new evidence, they update their probability estimates and become more accurate.
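The drug-testing example above can be made concrete with a short calculation. The numbers below are hypothetical, but they demonstrate the classic base-rate effect: when few of the people tested actually use the drug, even an accurate test produces many false positives.

```python
# Hypothetical drug-test numbers, for illustration only:
prevalence = 0.02           # P(user): fraction of test takers who use the drug
sensitivity = 0.95          # P(positive | user): true positive rate
false_positive_rate = 0.02  # P(positive | non-user)

# Overall probability of a positive result (law of total probability)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' theorem: probability someone uses the drug given a positive result
p_user_given_positive = sensitivity * prevalence / p_positive
print(round(p_user_given_positive, 3))  # 0.492
```

Despite the test’s 95 percent true positive rate, fewer than half of the positive results in this scenario come from actual users, which is why test designers study base rates as closely as accuracy.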