- Bayesian learning algorithms calculate explicit probabilities
for hypotheses
- the naive Bayes classifier is among the most effective methods for
classifying text documents
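A minimal naive Bayes sketch on a toy corpus (the words, labels, and counts below are invented for illustration, not from the source): each class gets a prior from its frequency, each word a smoothed conditional probability, and classification picks the class maximizing the log-posterior.

```python
from collections import Counter
from math import log

# Toy training corpus: (words, label) pairs -- invented for illustration.
train = [
    (["cheap", "pills", "buy"], "spam"),
    (["meeting", "schedule", "agenda"], "ham"),
    (["buy", "cheap", "now"], "spam"),
    (["project", "meeting", "notes"], "ham"),
]

def train_nb(data):
    """Estimate class counts and per-class word counts."""
    class_counts = Counter(label for _, label in data)
    word_counts = {c: Counter() for c in class_counts}
    for words, label in data:
        word_counts[label].update(words)
    vocab = {w for words, _ in data for w in words}
    return class_counts, word_counts, vocab

def classify(words, class_counts, word_counts, vocab):
    """Pick the class maximizing log P(c) + sum_w log P(w | c)."""
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for c, n in class_counts.items():
        score = log(n / total)  # log prior
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in words:
            # Laplace (add-one) smoothing so unseen words get nonzero probability
            score += log((word_counts[c][w] + 1) / denom)
        if score > best_score:
            best, best_score = c, score
    return best

counts, wc, vocab = train_nb(train)
print(classify(["buy", "cheap"], counts, wc, vocab))  # spam
```

The independence assumption (word probabilities conditionally independent given the class) is what makes the per-word product tractable.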
- Bayesian methods can also be used to analyze other algorithms
- each training example incrementally increases or decreases the
estimated probability that a hypothesis is correct
- prior knowledge can be combined with observed data to determine
the final probability of a hypothesis
- this prior knowledge consists of
    - a prior probability for each candidate hypothesis, and
    - a probability distribution over the observed data for
each possible hypothesis
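Combining the two ingredients above can be sketched as follows (the three hypotheses, their priors, and the likelihoods of some fixed observed data D are all invented numbers): the posterior for each hypothesis is proportional to likelihood times prior, normalized over all candidates.

```python
# Hypothetical priors P(h) and data likelihoods P(D | h) for fixed data D.
prior = {"h1": 0.7, "h2": 0.2, "h3": 0.1}
likelihood = {"h1": 0.05, "h2": 0.40, "h3": 0.30}

# Posterior P(h | D) is proportional to P(D | h) * P(h); normalize over h.
unnorm = {h: likelihood[h] * prior[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: v / z for h, v in unnorm.items()}
print(posterior)
```

Note how the data can overturn the prior: h1 starts out most probable, but its poor fit to the data leaves h2 with the largest posterior.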
- Bayesian methods accommodate hypotheses that make probabilistic
predictions, e.g. "this pneumonia patient has a 93% chance of complete
recovery"
- new instances can be classified by combining the predictions of
multiple hypotheses, weighted by their probabilities
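This weighted combination can be sketched with invented numbers: three hypotheses with posterior probabilities, each predicting a class for a new instance; the classes' posterior-weighted votes are summed and the heaviest class wins.

```python
# Hypothetical posterior over three hypotheses and each hypothesis's
# prediction for a new instance (all values invented for illustration).
posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
prediction = {"h1": "+", "h2": "-", "h3": "-"}

# Combine predictions: sum the posterior mass behind each predicted class.
votes = {}
for h, p in posterior.items():
    votes[prediction[h]] = votes.get(prediction[h], 0.0) + p
print(max(votes, key=votes.get))  # "-" wins with weight 0.6
```

Notice that "-" wins even though the single most probable hypothesis h1 predicts "+": combining all hypotheses can disagree with simply following the best one.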
- Even when computationally intractable, Bayesian methods can provide a
standard of optimal decision making against which other practical
methods can be compared
- Practical difficulties:
    - they require initial knowledge of many probabilities; when these
are not known, they must be estimated from background knowledge,
previously available data, and assumptions about the form of the
underlying distributions
    - determining the Bayes optimal hypothesis carries significant
computational cost in the general case (linear in the number of
candidate hypotheses); in certain specialized situations this cost can
be significantly reduced
Patricia Riddle
Fri May 15 13:00:36 NZST 1998