Data integration problems are commonly viewed as interoperability issues, where the burden of reaching common ground for exchanging data is distributed across the peers involved in the process. While apparently an effective approach to standardization and interoperability, this places a constraint on data providers who, for a variety of reasons, require backwards compatibility with proprietary or non-standard mechanisms. Publishing a holistic data API is one such use case, where a single peer performs most of the integration work in a many-to-one scenario. Incidentally, this is also the basic setting of software compilers, whose operational model comprises phases that analyse, link, and assemble source code and generate intermediate code. There are several analogies with a data integration process, all the more so with data that live in the Semantic Web. But what requirements would a data provider need to satisfy for an integrator to be able to query and transform its data effectively, with no further obligations imposed on the provider? With this paper, we inquire into the practices and essential prerequisites that could turn this intuition into a concrete and exploitable vision, within Linked Data and beyond.
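As a purely illustrative sketch of the compiler analogy (not a method described in the paper), the snippet below "compiles" a provider's Linked Data in three phases: analysis (parse the data and inspect the terms it uses), linkage (map provider terms onto a target vocabulary), and generation of an intermediate graph that a holistic data API could serve. The provider data, the mapping to FOAF, and the use of rdflib are all assumptions made for the example.

```python
# Hypothetical example: compiler-like phases over a provider's Linked Data.
# The provider vocabulary, the FOAF mapping, and rdflib are assumptions,
# not prescriptions from the paper.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

PROVIDER_DATA = """
@prefix ex: <http://provider.example/schema#> .
@prefix res: <http://provider.example/resource/> .
res:alice ex:fullName "Alice" ; a ex:Person .
"""

EX = Namespace("http://provider.example/schema#")

# Phase 1 -- analysis: parse the provider's data and collect the terms it uses.
g = Graph()
g.parse(data=PROVIDER_DATA, format="turtle")
provider_terms = {p for _, p, _ in g} | {o for _, _, o in g.triples((None, RDF.type, None))}
print("Provider terms:", provider_terms)

# Phase 2 -- linkage: map provider terms to the integrator's target vocabulary
# (FOAF here, chosen only for illustration; a real mapping could be declared or mined).
term_map = {EX.fullName: FOAF.name, EX.Person: FOAF.Person}

# Phase 3 -- generation: emit an intermediate graph in the target vocabulary,
# ready to be queried behind the integrator's data API.
target = Graph()
for s, p, o in g:
    mapped_o = term_map.get(o, o) if isinstance(o, URIRef) else o
    target.add((s, term_map.get(p, p), mapped_o))

print(target.serialize(format="turtle"))
```

In this reading, the integrator plays the role of the compiler front and back end, while the provider only needs to expose data that is analysable and mappable; the paper asks which prerequisites on the provider side make that possible.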