The chapter outlined some of the promises and intricacies of using linked open data in the area of public procurement. It went through the different yet interrelated subtasks: data extraction and publishing (leveraging the Public Contracts Ontology as a domain-specific, externally interlinked vocabulary), buyer/supplier matchmaking, and aggregated analytics.
Despite the numerous technical difficulties, especially regarding the coverage and quality of existing open data sources, it is clear that handling procurement data in RDF and linking it to other (government, geographic, commercial, encyclopedic, etc.) data opens novel avenues for matchmaking and aggregate analytics. The use of a common data format (RDF), a common domain vocabulary (PCO), and shared classifications (such as CPV and TERYT) allows external data to be integrated; furthermore, since the data are separated from their initial applications, they can be consumed by third-party applications originally developed for matchmaking over other procurement datasets. The often implicit model of legacy data can also be compared with a carefully crafted ontological domain model, and ambiguities can be discovered. Finally, the data itself potentially becomes cleaner during the extraction and transformation process; even if some analytic tools require it to be downgraded back to simpler formats such as CSV, its value may be higher than it was originally.
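The integration benefit described above can be sketched in a few lines. The snippet below models RDF statements as plain Python tuples and joins a contract's CPV code against a hypothetical external label table; all URIs, property names, and values are invented for illustration and do not come from any actual PCO dataset.

```python
# Illustrative sketch: RDF-style triples as Python tuples.
# All URIs and labels here are made up; real data would use actual
# PCO properties and the official CPV code list.

contract_triples = [
    ("ex:contract42", "pco:procuringEntity", "ex:cityOfPrague"),
    ("ex:contract42", "pco:mainObject", "cpv:45233120"),
    ("ex:contract42", "pco:estimatedPrice", "ex:price42"),
]

# External dataset that happens to share the CPV identifiers
# (hypothetical labels, standing in for an external classification).
cpv_labels = {
    "cpv:45233120": "Road construction works",
}

def enrich(triples, labels):
    """Join contract triples with external labels via shared CPV URIs."""
    enriched = list(triples)
    for s, p, o in triples:
        if o in labels:
            enriched.append((o, "rdfs:label", labels[o]))
    return enriched

merged = enrich(contract_triples, cpv_labels)
```

Because both datasets use the same CPV URI, the join requires no schema mapping at all; this is precisely the payoff of shared classifications that the paragraph above points to.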
Future work in this field will most likely concentrate on the last two tasks (matchmaking and analytics), though with implications for extraction and publishing as well. Namely, precise matchmaking will require the RDFization and publication of further information, some of which (such as detailed specifications of procured goods or services) will have to be extracted from free text. Exploiting product ontologies such as those developed in the OPDM project could then be beneficial. The analytic functionality should more systematically exploit external linked data as predictive/descriptive features. Given the size and heterogeneity of the LOD cloud, however, smart methods of incremental data crawling, rather than plain SPARQL queries, should be employed.
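As a minimal sketch of such incremental crawling, the following breadth-first traversal dereferences one resource at a time and follows outgoing links up to a hop limit, instead of posing one large query. The dictionary stands in for HTTP lookups on the LOD cloud, and every URI in it is fictitious.

```python
from collections import deque

# Toy "linked data" web: resource URI -> triples obtained by
# dereferencing it. In practice each lookup would be an HTTP GET;
# here a dictionary stands in for the LOD cloud.
MOCK_WEB = {
    "ex:contract1": [("ex:contract1", "pco:procuringEntity", "ex:authorityA")],
    "ex:authorityA": [("ex:authorityA", "owl:sameAs", "dbpedia:AuthorityA")],
    "dbpedia:AuthorityA": [("dbpedia:AuthorityA", "rdfs:label", "Authority A")],
}

def crawl(seed, max_hops=2):
    """Incrementally dereference URIs, following object links up to max_hops."""
    seen = {seed}
    collected = []
    frontier = deque([(seed, 0)])
    while frontier:
        uri, depth = frontier.popleft()
        for s, p, o in MOCK_WEB.get(uri, []):
            collected.append((s, p, o))
            # Only enqueue dereferenceable, not-yet-visited objects.
            if depth < max_hops and o not in seen and o in MOCK_WEB:
                seen.add(o)
                frontier.append((o, depth + 1))
    return collected

triples = crawl("ex:contract1")
```

The hop limit and the visited set keep the crawl bounded, which is what makes this approach feasible on a cloud of heterogeneous endpoints where a single global SPARQL query would not be.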
Finally, while the current research has focused on the primary intended users of the PCFA, i.e. contracting authorities and (to a lesser extent) bidders, the remaining stakeholders should not be forgotten. The generic characteristics of contracts, products/services and bidders, captured by generalized features (such as price intervals, geographic regions or broad categories of products) in data mining results, are important for these parties, who participate directly in the matchmaking process; there are, however, also NGOs and supervisory bodies that primarily seek concrete corruption cases. To assist them, graph data mining methods should be adapted to the current state of the linked data cloud, so as to detect, in particular, instances of suspicious patterns over the (RDF) graph representing organizations (contracting authorities, bidders and others), contracts and the people engaged in them.
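One such suspicious pattern can be sketched directly: the same person connected to both the contracting authority and the awarded bidder of a single contract. The snippet below checks this configuration over triples modeled as tuples; the property names (`pco:awardedTender`, `ex:boardMemberOf`) and all entities are illustrative assumptions, not actual PCO terms or real data.

```python
# Hedged sketch of one conflict-of-interest graph pattern:
# a person linked to both the procuring entity and the awarded
# bidder of the same contract. All names are fictitious.

triples = [
    ("ex:tender7", "pco:procuringEntity", "ex:ministry"),
    ("ex:tender7", "pco:awardedTender", "ex:supplierX"),
    ("ex:alice", "ex:boardMemberOf", "ex:ministry"),
    ("ex:alice", "ex:boardMemberOf", "ex:supplierX"),
]

def objects(triples, s, p):
    """All objects o such that (s, p, o) is in the graph."""
    return {o for s2, p2, o in triples if s2 == s and p2 == p}

def conflicts(triples):
    """Return (person, contract) pairs matching the suspicious pattern."""
    hits = []
    contracts = {s for s, p, o in triples if p == "pco:procuringEntity"}
    people = {s for s, p, o in triples if p == "ex:boardMemberOf"}
    for c in sorted(contracts):
        authorities = objects(triples, c, "pco:procuringEntity")
        winners = objects(triples, c, "pco:awardedTender")
        for person in sorted(people):
            orgs = objects(triples, person, "ex:boardMemberOf")
            if orgs & authorities and orgs & winners:
                hits.append((person, c))
    return hits
```

A real detector would of course express such patterns as graph queries over the full linked dataset and score them statistically, but the matching logic is the same: a join of organization membership against both sides of a contract.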
Open Access. This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
OPDM ontologies: ebusiness-unibw.org/ontologies/opdm/