- If the network structure is given in advance and the variables are
fully observable in the training examples, then learning the conditional
probability tables is straightforward: estimate each table entry from
observed relative frequencies, just as in the Naive Bayes case, except
that here only some of the variables are assumed conditionally
independent (see the counting sketch after this list).
- If the network structure is given but only some of the variables
are observable, the problem is analogous to learning the weights for the
hidden units in an ANN.
- Similarly, use a gradient ascent procedure to search through the
space of hypotheses corresponding to all possible entries in the
conditional probability tables. The objective function that is
maximized is P(D|h), the probability of the observed training data D
given the hypothesis h (see the gradient ascent sketch after this list).
- By definition this corresponds to searching for the maximum
likelihood hypothesis for the table entries.
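The following is a minimal sketch of the fully observable case, not part of
the original notes: the function estimate_cpt, the dictionary-per-example
data format, and the Storm/BusTourGroup/Campfire variable names are all
illustrative assumptions. Each conditional probability table entry is
estimated as a relative frequency, exactly as in Naive Bayes but restricted
to each node's parents.

from collections import Counter

def estimate_cpt(data, child, parents):
    # Return P(child | parents) as {(parent_values, child_value): probability}
    joint = Counter()              # counts of (parent values, child value)
    parent_counts = Counter()      # counts of parent values alone
    for row in data:               # each row: variable name -> observed value
        pv = tuple(row[p] for p in parents)
        joint[(pv, row[child])] += 1
        parent_counts[pv] += 1
    return {key: n / parent_counts[key[0]] for key, n in joint.items()}

examples = [
    {"Storm": True,  "BusTourGroup": True,  "Campfire": False},
    {"Storm": True,  "BusTourGroup": False, "Campfire": True},
    {"Storm": False, "BusTourGroup": True,  "Campfire": True},
    {"Storm": True,  "BusTourGroup": True,  "Campfire": False},
]
print(estimate_cpt(examples, "Campfire", ["Storm", "BusTourGroup"]))
# e.g. P(Campfire=False | Storm=True, BusTourGroup=True) = 1.0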
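For the partially observable case, the sketch below is likewise only an
illustration under assumed names (log_likelihood, gradient_ascent, a toy
two-node network H -> X with hidden H and observed X, and the parameter
order P(H=1), P(X=1|H=0), P(X=1|H=1)): it climbs the space of table entries
with a numeric finite-difference gradient of log P(D|h), marginalizing the
hidden variable out of each example, rather than using any particular
analytic training rule.

import math

def log_likelihood(params, data):
    # log P(D|h) for the toy network H -> X, summing the hidden H out of each example
    p_h, p_x_h0, p_x_h1 = params          # P(H=1), P(X=1|H=0), P(X=1|H=1)
    total = 0.0
    for x in data:                        # x is 0 or 1; H is never observed
        px = ((1 - p_h) * (p_x_h0 if x else 1 - p_x_h0) +
              p_h * (p_x_h1 if x else 1 - p_x_h1))
        total += math.log(px)
    return total

def gradient_ascent(data, params, lr=0.02, steps=1000, eps=1e-5):
    # hill-climb the table entries by a numeric gradient of log P(D|h)
    params = list(params)
    for _ in range(steps):
        grad = []
        for i in range(len(params)):
            bumped = params[:]
            bumped[i] += eps
            grad.append((log_likelihood(bumped, data) -
                         log_likelihood(params, data)) / eps)
        # step uphill, keeping every entry a valid probability
        params = [min(max(p + lr * g, 1e-4), 1 - 1e-4)
                  for p, g in zip(params, grad)]
    return params

observations = [1, 1, 0, 1, 1, 0, 1, 1]   # values of X only; H is hidden
print(gradient_ascent(observations, [0.5, 0.4, 0.6]))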