CART (Breiman et al., Classification and Regression Trees, Chapman and Hall, New York, 1984) and (exhaustive) CHAID (Appl Stat 29:119–127, 1980) figure prominently among the procedures actually used in data-based management and related fields. CART is a well-established procedure that produces binary trees. CHAID, in contrast, admits multiple splits, a feature that allows the splitting variable to be exploited more extensively. On the other hand, CHAID depends on premises that are questionable in practical applications, because it relies on simultaneous chi-square and F-tests, respectively. The null distribution of the F-statistic, for instance, rests on a normality assumption that is not plausible in a data-mining context. Moreover, neither procedure, as implemented in SPSS, for instance, takes ordinal dependent variables into account. In this paper we suggest an alternative tree algorithm that:

- requires categorical explanatory variables,
- chooses splitting attributes by means of predictive measures of association,
- determines the cells to be merged, and hence the number of splits, with the help of their conditional predictive power,
- greedily searches for a part of the population that can be classified/scored rather precisely, and
- takes ordinal dependent variables into consideration.
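To illustrate the idea of choosing splitting attributes by a predictive measure of association, the sketch below uses the Goodman–Kruskal lambda, a standard measure of the proportional reduction in prediction error; the paper does not specify which measure it employs, so this particular choice, along with the function names `goodman_kruskal_lambda` and `best_split_attribute`, is an illustrative assumption, not the authors' method.

```python
from collections import Counter

def goodman_kruskal_lambda(xs, ys):
    """Goodman-Kruskal lambda: proportional reduction in the error of
    predicting y once x is known (0 = no predictive value, 1 = perfect).
    Illustrative stand-in for the paper's predictive measure of association."""
    n = len(ys)
    # Errors made when always predicting the overall modal class of y.
    baseline_errors = n - max(Counter(ys).values())
    if baseline_errors == 0:
        return 0.0  # y is constant; nothing to improve on
    # Errors made when predicting the modal class of y within each cell of x.
    cells = {}
    for x, y in zip(xs, ys):
        cells.setdefault(x, Counter())[y] += 1
    conditional_errors = sum(sum(c.values()) - max(c.values())
                             for c in cells.values())
    return (baseline_errors - conditional_errors) / baseline_errors

def best_split_attribute(rows, attributes, target):
    """Greedy choice of the splitting variable: pick the categorical
    attribute whose knowledge most reduces the target's prediction error."""
    ys = [r[target] for r in rows]
    return max(attributes,
               key=lambda a: goodman_kruskal_lambda([r[a] for r in rows], ys))
```

For example, with a toy sample in which attribute `a` determines the class and `b` is uninformative, `best_split_attribute(rows, ["a", "b"], "cls")` selects `a`.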

Dlugosz, Stephan and Ulrich Müller-Funk (2009), Predictive Classification Trees, in: Fink, A.; Lausen, B.; Seidel, W.; Ultsch, A. (eds.), Advances in Data Analysis, Data Handling and Business Intelligence, Studies in Classification, Data Analysis, and Knowledge Organization, Springer, 127–134.


Dlugosz, Stephan
Müller-Funk, Ulrich


Binary trees, Factor reduction, Ordinal measure of dispersion, Predictive measure of association