Why Should My Company Assets Go Linked Open Data?

The benefits of adopting Linked Data technologies in an enterprise are multidimensional:

  • address the problem of data heterogeneity and integration within the business;

  • create value chains inside and across companies;

  • attach meaning to data, enabling search for relevant information;

  • increase the value of existing data and create new insights using BI and predictive analytics techniques;

  • adopt Linked Data as an add-on technology, with no need to change the existing infrastructure and models;

  • gain a competitive advantage by being an early adopter of LOD technologies.

These benefits are detailed further in Fig. 3, taken from the Deloitte report “Open data: Driving growth, ingenuity and innovation” [1].

LOD Enterprise Architectures

When adopting LOD principles, the Classical Enterprise IT Architecture (Fig. 2) is extended to work over the Internet, with mechanisms to overcome the technical barriers posed by format and semantic differences in the data being exchanged and manipulated. This results in a data processing workflow, described in the following three figures:

1. Figure 4 evolves the legacy or classic architecture by replacing the Enterprise Software Bus (ESB) with Linked Open Data protocols for data published on an external server.

Fig. 3. Benefits for businesses to go LOD

2. Figure 5 evolves the legacy or classic architecture by replacing the Enterprise Software Bus with Linked Open Data protocols among the enterprise LOD publishing servers.

3. Figure 6 zooms in on a publishing workflow: a transformation pipeline added on top of the legacy enterprise services (CRM, ERP, ...). Some legacy systems may evolve and upgrade to include LOD publishing, or they may provide feeds into the LOD publishing workflow.

LOD Enterprise Architecture with a Publishing Workflow

Figure 4 illustrates the LOD Enterprise architecture, where the middleware framework (ESB) of the Classical IT architecture (Fig. 2) is replaced with the LOD cloud. This architecture shows two types of data publishing, with the enterprise putting its RDF data on an external LOD server (server 5 in Fig. 4) according to one of two scenarios:

1. An RDF data set is produced from various data sources and subsystems (box 1 in Fig. 4) and is transferred to an external central LOD server.

2. Metadata is added to a classic web site using semantic metadata annotations (e.g. RDFa, Schema.org) on the HTML pages (box 3 in Fig. 4). An LOD server extracts this metadata, organizes it and makes it accessible as a central service (box 4 in Fig. 4).

Fig. 4. LOD Enterprise Architecture

Fig. 5. LOD Enterprise Integration Architecture

Fig. 6. Transformation pipeline
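The metadata-extraction step in scenario 2 can be sketched in a few lines. Schema.org metadata is often embedded in HTML as JSON-LD script blocks; the following minimal sketch pulls those blocks out using only the Python standard library. The sample page and the job-posting data in it are hypothetical, not from the source.

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects Schema.org metadata embedded as JSON-LD <script> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.items.append(json.loads(data))

# Hypothetical corporate page annotated with Schema.org metadata.
PAGE = """
<html><body>
<h1>Senior Data Engineer</h1>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "JobPosting",
 "title": "Senior Data Engineer", "hiringOrganization": "ExampleCorp"}
</script>
</body></html>
"""

extractor = JsonLdExtractor()
extractor.feed(PAGE)
job = extractor.items[0]   # the extracted metadata item
```

A central LOD service would run such an extractor over many annotated pages and expose the collected items as queryable data (box 4 in Fig. 4).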

The Ontology Schema Server (box 2 in Fig. 4) hosts the ontologies used to capture the semantics of the Linked Open Data. These ontologies may be standard (preferred) or custom designed. Other application services or platforms (server 4 in Fig. 4) may use the central LOD services to build specific calculations and reports. Business niches, or completely new business opportunities, can be created with visualizations and aggregations of the data.

Example

A head-hunter can crawl job postings and match them with CVs. Aggregations of offered real-estate vacancies can create new insights. Search engines may use the data for advanced searching, while portals[2] can harvest data sets from other portals and publish them from a single point of access in an LOD server (server 5 in Fig. 4).
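The head-hunter example amounts to a join over RDF data. As a minimal sketch, crawled job postings and CVs are represented below as (subject, predicate, object) triples and matched on a shared skill value; all URIs, predicates and data are hypothetical placeholders, not part of any standard vocabulary.

```python
# Crawled job postings and CVs as RDF-style triples (hypothetical data).
triples = {
    ("job:1", "schema:title", "Data Engineer"),
    ("job:1", "ex:requiresSkill", "SPARQL"),
    ("job:2", "ex:requiresSkill", "COBOL"),
    ("cv:alice", "ex:hasSkill", "SPARQL"),
    ("cv:alice", "ex:hasSkill", "Python"),
}

def pairs(predicate):
    """All (subject, object) pairs for a given predicate."""
    return {(s, o) for s, p, o in triples if p == predicate}

# Join job requirements against CV skills on the shared skill value,
# roughly what a SPARQL query with a common variable would do.
matches = {(job, cv)
           for job, skill in pairs("ex:requiresSkill")
           for cv, cv_skill in pairs("ex:hasSkill")
           if skill == cv_skill}
```

In practice this join would be a SPARQL query against the LOD server rather than in-memory Python, but the matching logic is the same.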

LOD Enterprise Architecture Integration

In the previous LOD Enterprise Architecture (Fig. 4 on p. 160), business operations were described in which Linked Data was produced or semantic annotations were added to the corporate content. The data produced (or extracted by crawling websites) was not published by the enterprise itself but was made available to the community. Any external LOD server could be used to publish the data, depending on the needs and requirements of the re-users.

In Fig. 5 the focus is on the operation of an enterprise that publishes its own data on an LOD server. It also illustrates the integration enabled between various networks, whether they belong to different branches of the same enterprise or to entirely different companies. Figure 5 on p. 160 shows two company-owned LOD publishing services (box 1 and 3 in Fig. 5). The published RDF is put on the company-owned (corporate or enterprise) server platform. Other application services or platforms (server 4 in Fig. 5) may use the owned LOD services to build specific calculations and reports. Such application services may be on a dedicated external platform, or on one or more of the corporate-owned LOD platforms/end-points. The Ontology Schema Server (box 2 in Fig. 5) hosts the ontologies used to capture the semantics of the Linked Open Data. These ontologies may be standard (preferred) or custom designed.
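The integration in Fig. 5 rests on the fact that RDF graphs from different endpoints merge by simple union, with links such as owl:sameAs connecting the branches' identifiers. The sketch below assumes two hypothetical branch endpoints and made-up URIs; a real application service would fetch these triples over HTTP or via federated SPARQL.

```python
# Triples published by two company-owned LOD endpoints (hypothetical data).
branch_a = {
    ("a:product42", "schema:name", "Widget"),
    ("a:product42", "owl:sameAs", "b:item-42"),   # link into branch B's data
}
branch_b = {
    ("b:item-42", "schema:price", "19.90"),
}

# Integrating the two graphs is just set union of their triples.
merged = branch_a | branch_b

def facts_about(uri):
    """Predicate/object pairs for a URI, following owl:sameAs aliases."""
    aliases = {uri} | {o for s, p, o in merged
                       if s == uri and p == "owl:sameAs"}
    return {(p, o) for s, p, o in merged
            if s in aliases and p != "owl:sameAs"}
```

The application service (server 4 in Fig. 5) can then see the name from one branch and the price from the other as facts about a single entity.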

Transformation Pipeline to LOD Enterprise Architecture

The implementation of the previously described types of LOD architectures (shown in Figs. 4 and 5) is based on a transformation pipeline added on top of the legacy enterprise services (e.g. CRM, ERP). The pipeline includes:

1. Identification of the types of data which are available, i.e. separate data into public and private, define an access security strategy, identify data sources, design retrieval procedures, set data versions, and provide data provenance;

2. Modelling with domain-specific vocabularies;

3. Designing the URI strategy for accessing the information, i.e. how the model and associated data should be accessed;

4. Publishing the data, which includes extraction as RDF, storage and querying;

5. Interlinking with other data.
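A condensed sketch of steps 2–4 above: a legacy (e.g. CRM/ERP) record is modelled with a vocabulary, given a URI according to a URI strategy, and serialized as N-Triples for publishing. The base URI, the department predicate and the sample record are hypothetical; only the foaf:name predicate is a real, widely used vocabulary term.

```python
# URI strategy: stable, resolvable identifiers under a dedicated /id/ path.
BASE = "https://data.example.com/id/employee/"

# Modelling: map legacy column names to vocabulary terms
# (foaf:name is standard; the department predicate is made up).
VOCAB = {
    "name": "http://xmlns.com/foaf/0.1/name",
    "dept": "https://data.example.com/def/department",
}

def record_to_ntriples(record):
    """Publishing: turn one legacy row into N-Triples lines."""
    subject = f"<{BASE}{record['id']}>"
    return [f'{subject} <{VOCAB[key]}> "{record[key]}" .'
            for key in record if key in VOCAB]

legacy_row = {"id": "e-1001", "name": "Ada Jones", "dept": "Research"}
lines = record_to_ntriples(legacy_row)
```

Interlinking (step 5) would then add further triples, e.g. owl:sameAs or skos:exactMatch links from these minted URIs to identifiers in external data sets.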

  • [1] deloitte.com/assets/Dcom-UnitedKingdom/Local%20Assets/Documents/Market%20insights/Deloitte%20Analytics/uk-insights-deloitte-analytics-open-data-june-2012.pdf

  • [2] Such as ODP: open-data.europa.eu/en/data/
 