Graph-based semi-supervised learning

Transductive learning was introduced by Vladimir Vapnik [VAP 98]. It is motivated by the observation that it can be easier than inductive learning: inductive learning tries to learn a general function in order to solve a specific problem, whereas transductive learning tries to learn a function only for the specific problem at hand.

The setting consists of a set of labeled objects (x_i, y_i), i = 1, 2, ..., l, where x_i ∈ R^n are objects represented by real-valued attributes and y_i ∈ {1, 2, ..., m} are the possible labels of these objects. Together with the labeled objects, there is also a set of k unlabeled objects x_{l+1}, ..., x_{l+k}. Rather than finding a general rule for classifying future examples, transductive learning aims at classifying only the k unlabeled objects, exploiting the information derived from the labeled ones.
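This setup can be illustrated with a small sketch. The example below is not from the text: it uses scikit-learn's LabelSpreading estimator, with hypothetical toy data, only to show the transductive interface, where unlabeled objects are passed alongside the labeled ones (marked with -1) and receive labels directly, instead of a general classifier being fit on the labeled data alone.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Hypothetical toy data: two well-separated clusters in R^2.
X = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],   # cluster near the origin
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.1]])  # cluster near (5, 5)

# l = 2 labeled objects (one per class); -1 marks the k = 4 unlabeled ones.
y = np.array([0, -1, -1, 1, -1, -1])

model = LabelSpreading(kernel='rbf', gamma=1.0)
model.fit(X, y)

# transduction_ holds the inferred label for every object, labeled and not.
print(model.transduction_)
```

Note that the unlabeled points are part of the training input itself: the estimator never produces a rule for unseen data, only labels for the k objects it was given.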

Within this framework, it is common to represent the geometry of the data as a weighted graph. For a detailed description of algorithms and applications in this field of research, known as graph transduction, we refer to [ZHU 05]. The purpose of this method is to transfer the information given by the labeled nodes to the unlabeled ones by exploiting the graph structure. Formally, we have a graph G = (V, E, w) in which V is the set of nodes representing both labeled and unlabeled points, V = V_l ∪ V_u; E ⊆ V × V is the set of edges connecting the nodes of the graph; and w : E → R^+ is a weight function assigning a similarity value to each edge e ∈ E. The task of transductive learning is to estimate the labels of the unlabeled points, given the pairwise similarities among the data points and a set of possible labels

Φ = {1, ..., c}.
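The propagation of labels over such a weighted graph can be sketched directly. The following is a minimal illustration in the spirit of the iterative label-propagation schemes surveyed in [ZHU 05], not the specific algorithm of the text: a Gaussian similarity defines w, the similarity matrix is row-normalized, and label mass is repeatedly diffused while the labeled nodes are clamped to their known classes. All data and parameter choices are illustrative.

```python
import numpy as np

# Six points on the line: two clusters, one labeled node per cluster.
X = np.array([[0.0], [0.5], [1.0], [4.0], [4.5], [5.0]])
labels = {0: 0, 5: 1}      # labeled nodes: index -> class in {0, 1}
c = 2                      # number of possible labels, |Phi| = c

# Weighted graph: Gaussian similarity w(i, j) = exp(-||x_i - x_j||^2).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)   # row-normalized transition matrix

# One-hot label matrix F: row i holds the class distribution of node i.
F = np.zeros((len(X), c))
for i, y in labels.items():
    F[i, y] = 1.0

# Diffuse labels along the edges, re-clamping the labeled nodes each step.
for _ in range(200):
    F = P @ F
    for i, y in labels.items():
        F[i] = 0.0
        F[i, y] = 1.0

pred = F.argmax(axis=1)    # estimated label for every node
print(pred)
```

Because within-cluster similarities dominate the (nearly zero) between-cluster ones, each unlabeled node inherits the label of the labeled node in its own cluster.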
