Linked Open Data

The Linked Data Life-Cycle
- The Linked Data Paradigm
- Resource Identification with IRIs
- De-referencability
- RDF Data Model
- RDF Serializations
- Integrating Heterogeneous Tools into the LOD2 Stack
- Deployment Management Leveraging Debian Packaging
- Example of Meta-packaging: OntoWiki
- Data Integration Based on SPARQL, WebID and Vocabularies
- Package Graph
- Access Control Graph
- Provenance Graph
- REST Integration of User Interfaces
- Conclusion and Outlook
- References

Technology Advances in Large-Scale RDF Data Management
- General Objectives
- Virtuoso Column Store
- Vectored Execution
- Vector Optimizations
- Query Optimization
- State of the RDF Tax
- Virtuoso Cluster Parallel
- Performance Dynamics
- Subsequent Development
- BSBM Benchmark Results
- Cluster Configuration
- Bulk Loading RDF
- Notes on the BI Workload
- Benchmark Results
- Emergent Schemas
- Step 1: Basic CS Discovery
- Step 2: Dimension Tables Detection
- Step 3: Human-Friendly Labels
- Step 4: CS Merging
- Step 5: Schema and Instance Filtering
- Final Schema Evaluation
- Conclusion
- References

Knowledge Base Creation, Enrichment and Repair
- Linked Data Creation and Extraction
- DBpedia, a Large-Scale, Multilingual Knowledge Base Extracted from Wikipedia
- RDFa, Microdata and Microformats Extraction Framework
- Rozeta
- Dictionary Management
- Text Annotation and Enrichment
- Analysis, Enrichment and Repair of Linked Data with ORE Tool
- Logical Debugging
- Motivation
- Support in ORE
- Schema Enrichment
- Motivation
- Support in ORE
- Constraint Based Validation
- Motivation
- Support in ORE
- Ontology Repair with PatOMat
- Linked Data Quality Assessment with RDFUnit
- Analysis of Link Validity
- Web Linkage Validator
- Data Graph Summary Model
- Link Analysis
- Provenance
- How to Improve Your Dataset with the Web Linkage Validator
- Benchmarking Semantic Named Entity Recognition Systems
- Conclusion
- References

Interlinking and Knowledge Fusion
- Introduction
- Vocabulary Mapping
- The Silk Link Discovery Framework
- Silk: Functionality and Main Concepts
- The GenLink Algorithm
- The ActiveGenLink Algorithm
- Data Cleansing and Reconciliation with LODRefine
- LODRefine
- Use Cases
- Quality Evaluation of Crowdsourcing Results
- Data Quality Assessment and Fusion
- Quality Assessment Metrics
- Fusion Functions
- Data Interlinking and Fusion for Asian Languages
- Interlinking Korean Resources in the Korean Alphabet: Korean Phoneme Distance
- Interlinking Korean Resources in Korean and English: Korean Transliteration Distance
- Interlinking Asian Resources in Chinese Alphabet: Han Edit Distance
- Asian Data Fusion Assistant
- Conclusion
- References

Facilitating the Exploration and Visualization of Linked Data
- Introduction
- Rsine: Getting Notified on Linked Data Changes
- Related Work
- Approach
- Subscribing for Notifications
- Stack Integration
- Notification Scenarios
- CubeViz – Exploration and Visualization of Statistical Linked Data
- The RDF Data Cube Vocabulary
- Integrity Analysis
- Faceted Exploration
- Generation of Dialogues
- Initial Pre-selection
- Chart Visualisation
- APIs
- Chart Options
- Element Recognition
- Interactive Legend
- Sharing Views
- Facete: A Generic Spatial Facetted Browser for RDF
- User Interface
- Concepts
- Faceted Search
- Finding Connections between SPARQL Concepts
- Display of Large Amounts of Geometries
- Related Work
- Conclusions and Future Work
- References

Supporting the Linked Data Life Cycle Using an Integrated Tool Stack
- Introduction
- The LOD2 Linked Data Stack
- Building a Linked Data Application
- Becoming LOD2 Linked Data Stack Component
- LOD2 Stack Repository
- Installing the LOD2 Linked Data Stack
- The LOD2 Linked Data Stack Release
- Available as Debian Packages
- Available as Online Component
- Available Online Data Sources
- The LOD2 Stack Components Functional Areas Coverage
- A Customized Linked Data Stack for Statistics
- Application Architecture and Scenarios
- LOD2 Statistical Workbench in Use
- The RDF Data Cube Vocabulary
- Example 1: Quality Assessment of RDF Data Cubes
- Example 2: Filtering, Visualization and Export of RDF Data Cubes
- Example 3: Merging RDF Data Cubes
- Towards a Broader Adoption
- Use Case 1: Digital Agenda Scoreboard
- Use Case 2: Statistical Office of the Republic of Serbia (SORS)
- Use Case 3: Business Registers Agency
- Challenges Faced by Early Adopters
- Conclusion
- References

Use Cases

LOD2 for Media and Publishing
- Introduction
- Rationale for the Media and Publishing Use Case
- Wolters Kluwer Company Profile
- Data Transformation, Interlinking and Enrichment
- Editorial Data Interfaces and Visualization Tools
- Business Impact and Relevant Pre-conditions for Success
- Processing Data
- Transformation from XML to RDF
- Metadata Management Process
- Pebbles, a Metadata Editor
- Notification Service
- Enrichment of WKD Data
- Visualization
- Licensing Semantic Metadata
- Traditional Protection Instruments for Intellectual Property
- Licensing Policies for Linked Data
- Rights Expression Languages for Linked Data Licenses
- Conclusion
- References

Building Enterprise Ready Applications Using Linked Open Data
- Introduction
- The Landscape of Enterprise and Corporate Data Today
- Why Should My Company Assets Go Linked Open Data?
- LOD Enterprise Architectures
- LOD Enterprise Architecture with a Publishing Workflow
- Example
- LOD Enterprise Architecture Integration
- Transformation Pipeline to LOD Enterprise Architecture
- Best Practices
- Data Sources Identification
- Modelling for the Specific Domain
- Migration of Legacy Vocabularies
- Definition of the URI Strategy
- Identification of Business Items as Resources Referenced by URIs
- Use HTTP/DNS Based URIs
- Use De-referenceable URIs
- Separate Resource and Resource Representation
- Design Cool URIs
- Opaque vs. Non-opaque URIs
- Publishing
- Publishing Pattern for Relational Data
- Publishing Pattern for Excel/CSV Data
- Publishing Pattern for XML Data
- Publishing Pattern for Unstructured Data
- Hosting and Serving
- Interlinking: The Creation of 5-Star Business Data
- Vocabulary Mapping
- Conclusion

Lifting Open Data Portals to the Data Web
- Public Data and Data Portals
- Using PublicData.eu
- Data Publishing
- Data Consumption
- Semantic Lifting of CSV to RDF
- Lifting the Tabular Data
- Tabular Data in PublicData.eu
- User-Driven Conversion Framework
- Conversion Results
- Statistical Data in Serbia
- Relevant Standards
- Working with Statistical Linked Data
- Serbian Statistical Office Use Case
- Multidimensional Economy Data in Poland
- Polish Open Economy Data
- Modelling Multidimensional Data with Data Cube Vocabulary
- Slices Generation
- Aggregations
- Exploration and Visualisation of Converted Data
- Statistical Data Exploration
- Geospatial Data Discovery
- Drill-Down Choropleth Maps
- Drill-Down Tables
- Conclusions and Future Work
- References

Linked Open Data for Public Procurement
- Public Procurement Domain
- Public Contracts Ontology
- Ontologies Reused by the PCO
- Core Concepts of the PCO
- Tendering Phase Modeling
- Pre-realization, Realization and Evaluation Phase Modeling
- Procurement Data Extraction and Pre-processing
- Data Extraction from HTML
- Data Extraction from Structured Formats
- TED Data
- Czech Data
- Polish Data
- U.S. Data
- LOD-Enabled Public Contract Matchmaking
- Public Contracts Filing Application
- Buyer's and Supplier's View
- Application Architecture
- Matchmaking Functionality Internals
- Aggregated Analysis of Procurement Linked Data
- Analysis Scenarios
- Analytical Methods
- Integration of Analytical Functionality into PCFA
- Conclusions
- References