- learn one rule, remove the training examples it covers, then iterate (a minimal sketch of the whole loop appears after this list)
- our one rule must have high accuracy but not necessarily
  high coverage: low coverage is fine because later iterations pick up the
  remaining positive examples (what does this do to the overfitting problem??)
- only throw out the *positive* examples covered; negatives stay, so later rules are still penalised for covering them
- final rules sorted by accuracy over the *whole* training set
- widely used
- greedy search, so there are no guarantees of finding the smallest or the
  best set of rules
- so each rule is learned on a different distribution of the training set
  (the covered positives are gone, so the class proportions shift)... isn't this a problem???
- the search is definitely skewed toward finding the best *set of rules*, not the best individual *rules*
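
The whole procedure fits in a few lines. Here is a minimal Python sketch of
the loop described above; learn_one_rule, the Example and Rule types, and the
0.9 accuracy threshold are all hypothetical placeholders invented for
illustration, not part of the original notes:

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Example:
    features: dict
    label: bool  # True = positive example


@dataclass
class Rule:
    tests: dict  # attribute -> required value; the conjunction implies positive

    def covers(self, e: Example) -> bool:
        return all(e.features.get(a) == v for a, v in self.tests.items())

    def accuracy(self, examples: List[Example]) -> float:
        covered = [e for e in examples if self.covers(e)]
        return sum(e.label for e in covered) / len(covered) if covered else 0.0


def sequential_covering(
    examples: List[Example],
    learn_one_rule: Callable[[List[Example]], Optional[Rule]],
    threshold: float = 0.9,
) -> List[Rule]:
    remaining = list(examples)  # shrinks as positives get covered
    rules: List[Rule] = []
    while True:
        rule = learn_one_rule(remaining)  # greedy: best single rule found
        if rule is None or rule.accuracy(remaining) < threshold:
            break  # no sufficiently accurate rule left
        rules.append(rule)
        # remove only the covered *positive* examples; negatives stay so
        # later rules are still penalised for covering them
        remaining = [e for e in remaining if not (rule.covers(e) and e.label)]
    # sort the final rules by accuracy over the *whole* training set,
    # not over the shrinking sets each rule was learned from
    rules.sort(key=lambda r: r.accuracy(examples), reverse=True)
    return rules

Note that because only covered positives are removed, any rule that clears a
positive accuracy threshold must cover at least one remaining positive, so
the candidate set strictly shrinks and the loop terminates.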