Parameter Learning and Structure Learning
The learning of probabilistic graphical models involves two distinct tasks: parameter learning and structure learning. Parameter learning is the easier task. Given a model structure (i.e., a directed or undirected graph in the case of Bayesian or Markov networks, a set of first-order logic clauses in the case of Markov logic, etc.), the task is to fit the model to the data, i.e., to determine the parameter settings for which an optimal fit is obtained. Structure learning is more difficult. It involves determining an optimal model structure (i.e., the optimal graph structure, the optimal set of clauses, etc.). This in itself often involves a search through the model space, where each candidate model is evaluated individually by fitting it to the data (i.e., via parameter learning) and measuring how good the fit is. However, the structure learning step may also exploit additional background knowledge that the user has about the likely structure of the model.
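To make the parameter learning task concrete, the following is a minimal illustrative sketch of maximum-likelihood fitting for a Bayesian network with a fixed structure, assuming binary variables and complete data; the variable names, the `fit_cpts` helper, and the use of Laplace smoothing are illustrative choices, not tied to any particular system discussed here.

```python
from itertools import product

def fit_cpts(structure, data):
    """Smoothed maximum-likelihood CPTs: one conditional frequency table per node.

    structure: dict mapping each variable to a tuple of its parents.
    data: list of dicts, each mapping variable -> 0/1 (complete data assumed).
    Returns dict: (var, parent_values) -> P(var=1 | parents=parent_values).
    """
    cpts = {}
    for var, parents in structure.items():
        # One estimate per configuration of the parents.
        for pa_vals in product([0, 1], repeat=len(parents)):
            rows = [r for r in data
                    if all(r[p] == v for p, v in zip(parents, pa_vals))]
            ones = sum(r[var] for r in rows)
            # Laplace smoothing avoids zero probabilities for unseen cases.
            cpts[(var, pa_vals)] = (ones + 1) / (len(rows) + 2)
    return cpts

# Hypothetical toy structure and data set.
structure = {"Rain": (), "Sprinkler": (), "WetGrass": ("Rain", "Sprinkler")}
data = [
    {"Rain": 1, "Sprinkler": 0, "WetGrass": 1},
    {"Rain": 0, "Sprinkler": 1, "WetGrass": 1},
    {"Rain": 0, "Sprinkler": 0, "WetGrass": 0},
    {"Rain": 1, "Sprinkler": 1, "WetGrass": 1},
]
cpts = fit_cpts(structure, data)
```

Because the data are complete, fitting reduces to counting conditional frequencies; with latent variables (as in the CP-logic encoding discussed below), an iterative method such as EM would be needed instead.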
Parameter learning is by now a relatively standard task. Structure learning, on the other hand, needs to be implemented differently depending on the formalism that is used; for instance, since the syntax of CP-logic programs is quite different from that of Markov logic networks, quite different structure learning approaches are required. For Markov logic networks, Richardson and Domingos show how the structure can be determined by using the ILP system Claudien as an auxiliary system. Meert and Blockeel show how the structure of acyclic CP-logic programs can be learned by turning them into equivalent Bayesian networks that contain one latent variable per CP-logic rule and whose structure is constrained in a particular way; they then show how standard techniques for learning the structure of Bayesian networks can be adapted to ensure that the resulting networks obey these structural constraints. Structure learning methods have been proposed for many other formalisms as well [24, 20].
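The generic score-based search scheme underlying many of these standard Bayesian-network structure learners can be sketched as follows: candidate structures are generated by single-edge additions, each candidate is fitted to the data (parameter learning) and scored, and the best-scoring structure is kept. This is only a hedged illustration assuming binary variables and complete data; the function names and the BIC-style penalized log-likelihood score are illustrative choices, not the method of any specific cited system.

```python
import math

def fit_and_score(structure, data):
    """Fit smoothed ML parameters and return a BIC-style penalized log-likelihood."""
    loglik, n_params = 0.0, 0
    for var, parents in structure.items():
        n_params += 2 ** len(parents)  # one free parameter per parent configuration
        for row in data:
            pa_vals = tuple(row[p] for p in parents)
            rows = [r for r in data
                    if all(r[p] == v for p, v in zip(parents, pa_vals))]
            p1 = (sum(r[var] for r in rows) + 1) / (len(rows) + 2)
            loglik += math.log(p1 if row[var] else 1 - p1)
    return loglik - 0.5 * n_params * math.log(len(data))

def creates_cycle(structure, child, parent):
    """Would adding the edge parent -> child close a directed cycle?"""
    stack, seen = [parent], set()
    while stack:
        v = stack.pop()
        if v == child:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(structure[v])  # walk up through ancestors
    return False

def hill_climb(variables, data):
    """Greedy search over DAGs: repeatedly add the edge that most improves the score."""
    structure = {v: () for v in variables}
    best = fit_and_score(structure, data)
    improved = True
    while improved:
        improved = False
        for child in variables:
            for parent in variables:
                if parent == child or parent in structure[child]:
                    continue
                if creates_cycle(structure, child, parent):
                    continue
                cand = dict(structure)
                cand[child] = structure[child] + (parent,)
                score = fit_and_score(cand, data)
                if score > best:
                    structure, best, improved = cand, score, True
    return structure
```

Each inner step performs a full parameter-learning pass on the candidate structure, which illustrates why structure learning is so much more expensive than parameter learning; practical systems cache sufficient statistics and score only the changed family.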