Category Translation: Learning to Understand Information on the Internet

This paper investigates the problem of automatically learning declarative models of information sources available on the Internet. We report on ILA, a domain-independent program that learns the meaning of external information by explaining it in terms of internal categories. In our experiments, ILA starts with knowledge of local faculty members and learns models of the Internet service whois and of the personnel directories available at Berkeley, Brown, Caltech, Cornell, Rice, Rutgers, and UCI, averaging fewer than 40 queries per information source. ILA’s hypothesis language is first-order conjunctions, and its bias is compactly encoded as a determination. We analyze ILA’s sample complexity both within the Valiant model and with a probabilistic model tailored specifically to ILA.
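To make the idea of category translation concrete, the following is a minimal sketch, assuming a toy internal model and a simulated external directory. The names internal_model, query_source, and explain_record are illustrative, not ILA's actual interface, and the sketch replaces ILA's first-order conjunctive hypotheses and determination-based bias with a simple intersection of candidate field interpretations.

```python
# Toy illustration of learning a source model by explaining external
# output in terms of internal categories. NOT ILA's algorithm; the data
# and the external source below are invented for this sketch.

# Internal model: attributes of locally known individuals (e.g., faculty).
internal_model = [
    {"firstname": "ann", "lastname": "smith", "login": "ann",    "email": "ann@cs.edu"},
    {"firstname": "bob", "lastname": "jones", "login": "bjones", "email": "bob@ee.edu"},
]

def query_source(lastname):
    """Simulated external directory: returns an unlabeled tuple of fields."""
    fake_directory = {
        "smith": ("ann", "smith", "ann@cs.edu"),
        "jones": ("bob", "jones", "bob@ee.edu"),
    }
    return fake_directory.get(lastname)

def explain_record(person, record):
    """For each output field, collect the internal attributes whose value
    for the queried person matches that field (candidate interpretations)."""
    return [
        {attr for attr, value in person.items() if value == field}
        for field in record
    ]

# Query the source with known individuals and intersect the explanations,
# so interpretations contradicted by any observation are eliminated.
hypotheses = None
for person in internal_model:
    record = query_source(person["lastname"])
    if record is None:
        continue
    explanation = explain_record(person, record)
    if hypotheses is None:
        hypotheses = explanation
    else:
        hypotheses = [h & e for h, e in zip(hypotheses, explanation)]

print(hypotheses)
# e.g. [{'firstname'}, {'lastname'}, {'email'}]
# The second query rules out 'login' as an interpretation of the first field.
```

In this toy setting, the first query leaves the first output field ambiguous (it could be a first name or a login), and the second query eliminates the incorrect interpretation; the paper's analysis concerns how many such queries are needed in general.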