A bias/variance decomposition for models using collective inference

Publication Date

2008

Journal or Book Title

Machine Learning

Abstract

Bias/variance analysis is a useful tool for investigating the performance of machine learning algorithms. Conventional analysis decomposes loss into errors due to aspects of the learning process, but in relational domains, the inference process used for prediction introduces an additional source of error. Collective inference techniques contribute to this error both through the use of approximate inference algorithms and through variation in the availability of test-set information. To date, the impact of inference error on model performance has not been investigated. We propose a new bias/variance framework that decomposes loss into errors due to both the learning and inference processes. We evaluate the performance of three relational models on both synthetic and real-world datasets and show that (1) inference can be a significant source of error, and (2) the models exhibit different types of errors as data characteristics are varied.
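
For intuition only, the sketch below shows one generic way such a decomposition can be written under squared loss; it uses the law of total variance and is not necessarily the exact framework proposed in the paper. Here $D$ ranges over training sets (the learning process), $I$ over runs of the approximate collective-inference procedure and over the available test-set information, $\hat{y}_{D,I}$ is the resulting prediction, and $y^{*}$ is the optimal prediction.

% Illustrative sketch only: total variance split into learning and inference
% components via the law of total variance; the paper's own definitions of
% learning/inference bias and variance may differ.
\begin{aligned}
\mathbb{E}_{D,I}\big[(y^{*}-\hat{y}_{D,I})^{2}\big]
  &= \underbrace{\big(y^{*}-\mathbb{E}_{D,I}[\hat{y}_{D,I}]\big)^{2}}_{\text{bias}^{2}}
   + \underbrace{\mathrm{Var}_{D}\big(\mathbb{E}_{I}[\hat{y}_{D,I}\mid D]\big)}_{\text{learning variance}}
   + \underbrace{\mathbb{E}_{D}\big[\mathrm{Var}_{I}(\hat{y}_{D,I}\mid D)\big]}_{\text{inference variance}}
\end{aligned}

In this reading, the first term captures systematic error, the second captures variation in predictions across training sets, and the third captures the additional variation introduced by approximate collective inference itself.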

DOI

https://doi.org/10.1007/s10994-008-5066-6

Pages

87-106

Volume

73

Issue

1
