Applications: Semantic Web technologies for software and systems engineering - Process Models
Stanford University
Guotong Xie
Dip. di Elettronica e Informazione, Politecnico di Milano
Department of Computer Science and Technology, Tsinghua University, Beijing, China
Ricardo Kawase
Raytheon
Dataset about ISWC 2009.
Tue May 03 19:01:32 CEST 2016
Management: Search and Query 1
Marc Bron
Chiara Ghidini
Semantics for the Rest of Us
Leo Obrst
Management: Search and Query 2
Christoph Boehm
Krisztian Balog
Tim Clark
Semantic Enhancement for Enterprise Data Management
Taking customer data as an example, the paper presents an approach to enhance the management of enterprise data by using Semantic Web technologies. Customer data is the most important kind of core business entity a company uses repeatedly across many business processes and systems, and customer data management (CDM) is becoming critical for enterprises because it keeps a single, complete and accurate record of customers across the enterprise. Existing CDM systems focus on integrating customer data from all customer-facing channels and front and back office systems through multiple interfaces, as well as publishing customer data to different applications. To make effective use of the CDM system, this paper investigates semantic query and analysis over the integrated and centralized customer data, enabling automatic classification and relationship discovery. We have implemented these features over IBM WebSphere Customer Center and shown the prototype to our clients. We believe that our study and experiences are valuable for both the Semantic Web and data management communities.
Parallel Materialization of the Finite RDFS Closure for Hundreds of Millions of Triples
Rahul Parundekar
Scalable Distributed Reasoning using MapReduce
North Carolina State University
A Generic Approach for Large-Scale Ontological Reasoning in the Presence of Access Restrictions to the Ontology's Axioms
The framework developed in this paper can deal with scenarios where selected sub-ontologies of a large ontology are offered as views to users, based on criteria like the user's access right, the trust level required by the application, or the level of detail requested by the user. Instead of materializing a large number of different sub-ontologies, we propose to keep just one ontology, but equip each axiom with a label from an appropriate labeling lattice. The access right, required trust level, etc. is then also represented by a label (called user label) from this lattice, and the corresponding sub-ontology is determined by comparing this label with the axiom labels. For large-scale ontologies, certain consequences (like the concept hierarchy) are often precomputed. Instead of precomputing these consequences for every possible sub-ontology, our approach computes just one label for each consequence such that a comparison of the user label with the consequence label determines whether the consequence follows from the corresponding sub-ontology or not. In this paper we determine under which restrictions on the user and axiom labels such consequence labels (called boundaries) always exist, describe different black-box approaches for computing boundaries, and present first experimental results that compare the efficiency of these approaches on large real-world ontologies. Black-box means that, rather than requiring modifications of existing reasoning procedures, these approaches can use such procedures directly as sub-procedures, which allows us to employ existing highly-optimized reasoners.
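The labeling scheme this abstract describes can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: labels are modeled as frozensets of access tags ordered by subset inclusion (one simple labeling lattice), and the axioms, tags, and the single-justification boundary computation are invented for the example.

```python
# Sketch of labeled-axiom views: labels are frozensets of access tags,
# ordered by subset inclusion (a simple labeling lattice). Axiom names
# and tags are illustrative, not from the paper's experiments.

def visible(axiom_label, user_label):
    """An axiom is in the user's sub-ontology if its label is below the user label."""
    return axiom_label <= user_label

ontology = {
    "Employee SubClassOf Person":     frozenset(),           # public axiom
    "Manager SubClassOf Employee":    frozenset({"staff"}),
    "Salary SubClassOf Confidential": frozenset({"hr"}),
}

def sub_ontology(user_label):
    return {ax for ax, lbl in ontology.items() if visible(lbl, user_label)}

def boundary(justification):
    """For a consequence with a single justification, the boundary is the
    join (here: union) of the labels of the justification's axioms."""
    out = frozenset()
    for ax in justification:
        out |= ontology[ax]
    return out

# "Manager SubClassOf Person" follows for a user exactly when the
# user's label dominates the consequence's boundary.
b = boundary(["Employee SubClassOf Person", "Manager SubClassOf Employee"])
print(b <= frozenset({"staff", "hr"}))  # staff user sees the consequence
print(b <= frozenset())                 # anonymous user does not
```

The point of the boundary is visible here: the consequence label is computed once, and deciding whether any given user sees the consequence is a single lattice comparison rather than a reasoning call.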
Jacopo Urbani
TrendLight Netherlands B.V.
Michel Buffa
Supporting Multi-View User Ontology to Understand Company Value Chains
The objective of the Market Blended Insight (MBI) project is to develop web based techniques to improve the performance of UK Business to Business (B2B) marketing activities. The analysis of company value chains is a fundamental task within MBI because it is an important model for understanding the market place and the company interactions within it. The project has aggregated rich data profiles of 3.7 million companies that form the active UK business community. The profiles are augmented by Web extractions from heterogeneous sources to provide unparalleled business insight. Advances by the Semantic Web in knowledge representation and logic reasoning allow flexible integration of data from heterogeneous sources, transformation between different representations and reasoning about their meaning. The MBI project has identified that the market insight and analysis interests of different types of users are difficult to maintain using a single domain ontology. Therefore, the project has developed a technique to undertake a plurality of analyses of value chains by deploying a distributed multi-view ontology to capture different user views over the classification of companies and their various relationships.
Monash University
A Weighted Approach for Partial Matching in Mobile Reasoning
Due to significant improvements in the capabilities of small devices such as PDAs and smart phones, these devices can not only consume but also provide Web Services. The dynamic nature of the mobile environment means that users need accurate and fast approaches for service discovery. In order to achieve high accuracy, semantic languages can be used in conjunction with logic reasoners. Since powerful broker nodes are not always available (due to lack of long-range connectivity), create a bottleneck (since mobile devices are all trying to access the same server), and constitute a single point of failure (in the case that a central server fails), on-board mobile reasoning must be supported. However, reasoners are notoriously resource-intensive and do not scale down to small devices. Therefore, in this paper we provide an efficient mobile reasoner which relaxes the current strict and complete matching approaches to support anytime reasoning. Our approach matches the most important request conditions (as deemed by the user) first and provides a degree-of-match and confidence result to the user. We provide a prototype implementation and performance evaluation of our work.
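The anytime idea in this abstract — check the highest-weighted conditions first and report both a degree of match and how much of the request was actually examined — can be sketched as follows. This is an invented illustration of the general scheme, not the paper's algorithm; the condition names and weights are assumptions.

```python
# Weighted partial matching sketch: match the highest-weighted request
# conditions first; "budget" caps how many conditions the device has
# time to check, so the reasoner can stop anytime.

def partial_match(request, service, budget):
    """request: {condition: weight}; service: set of satisfied conditions;
    budget: number of conditions we can afford to check on the device.
    Returns (degree of match over checked weight, confidence = fraction
    of total weight checked)."""
    ordered = sorted(request, key=request.get, reverse=True)  # important first
    total = sum(request.values())
    checked = matched = 0.0
    for cond in ordered[:budget]:
        checked += request[cond]
        if cond in service:
            matched += request[cond]
    degree = matched / checked if checked else 0.0
    confidence = checked / total if total else 0.0
    return degree, confidence

request = {"prints-color": 5, "nearby": 3, "free": 1}
service = {"prints-color", "free"}
print(partial_match(request, service, budget=2))
```

With a budget of 2, only the two most important conditions are checked, so the degree of match is reported together with a confidence below 1.0; raising the budget to cover all conditions drives the confidence to 1.0.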
Policy Aware Content Reuse on the Web
Decidable Order-Sorted Logic Programming for Ontologies and Rules with Argument Restructuring
This paper presents a decidable fragment for combining ontologies and rules in order-sorted logic programming. We describe order-sorted logic programming with sort, predicate, and meta-predicate hierarchies for deriving predicate and meta-predicate assertions. Meta-level predicates (predicates of predicates) are useful for representing relationships between predicate formulas, and further, they conceptually yield a hierarchy similar to the hierarchies of sorts and predicates. By extending the order-sorted Horn-clause calculus, we develop a query-answering system that can answer queries such as atoms and meta-atoms generalized by containing predicate variables. We show that the expressive query-answering system computes every generalized query in single exponential time, i.e., the complexity of our query system is equal to that of DATALOG.
University of Victoria
University of Liverpool
Lightning talks
Semantics: Ontology modeling, reuse, extraction, and evolution
Hanyu Li
David Ratcliffe
University of Chicago
Grit Denker
Julius Volz
Semantics: Alignment
Semantic Web Applications in Scientific Discourse
Ignazio Palmisano
Wendy Hall
Blaz Fortuna
A Practical Approach for Scalable Conjunctive Query Answering on Acyclic EL+ Knowledge Base
Antje Schultz
Seungwoo Lee
Julia Kalloz
Applications: Semantic Web technologies for Services
A Weighted Approach for Partial Matching in Mobile Reasoning
Jun Ma
Decidable Order-Sorted Logic Programming for Ontologies and Rules with Argument Restructuring
Jose Alferes
Jianfeng Du
Anna Averbakh
Semantics: Reasoning with Rules or Modules
Evan Patton
A Generic Approach for Large-Scale Ontological Reasoning in the Presence of Access Restrictions to ...
University of Calabria, Italy
Sören Auer
Patrick Siehndel
Systems Ontological Use-Cases
e-health
mediation
ontology
use case
Using Hybrid Search and Query for E-discovery
We investigated the use of hybrid search and query for locating enterprise data relevant to a requesting party's legal case (e-discovery identification). We extended the query capabilities of SPARQL with search capabilities to provide integrated access to structured, semi-structured and unstructured data sources. Every data source in the enterprise is potentially within the scope of e-discovery identification, so we use some common enterprise structured data sources that provide product and organizational information to guide the search and restrict it to a manageable scale. We use hybrid search and query to conduct a rich high-level search, which identifies the key people and products to coarsely locate relevant data sources. Furthermore, the product and organizational data sources are also used to increase recall, which is a key requirement for e-discovery identification.
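The combination described here — a structured query narrows candidates, a keyword search then scores the unstructured text — can be mimicked in miniature. This is a stand-in sketch, not the paper's SPARQL extension: the records, fields, and scoring are all invented for illustration.

```python
# Minimal stand-in for hybrid search and query: a structured predicate
# (like product/organization fields from enterprise master data) narrows
# the candidate set, then a keyword match scores the free text.
import re

documents = [  # invented enterprise records
    {"product": "WidgetX", "owner": "alice", "text": "WidgetX launch memo, pricing draft"},
    {"product": "WidgetX", "owner": "bob",   "text": "lunch plans"},
    {"product": "GadgetY", "owner": "alice", "text": "WidgetX mentioned in passing"},
]

def hybrid_query(structured, keywords):
    # Structured part: exact-match constraints, akin to a basic graph pattern.
    candidates = [d for d in documents
                  if all(d.get(k) == v for k, v in structured.items())]
    # Search part: keyword relevance over the unstructured text.
    def score(doc):
        return sum(1 for kw in keywords
                   if re.search(re.escape(kw), doc["text"], re.IGNORECASE))
    hits = [(score(d), d) for d in candidates]
    return [d for s, d in sorted(hits, key=lambda p: -p[0]) if s > 0]

print([d["owner"] for d in hybrid_query({"product": "WidgetX"},
                                        ["pricing", "widgetx"])])
```

Restricting by the structured `product` field first is what keeps the text search at a manageable scale, which mirrors the abstract's use of product and organizational data to guide identification.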
Marta Sabou
XLWrap - Querying and Integrating Arbitrary Spreadsheets with SPARQL
rdf
GeoSpatial and Moving Objects with RDF and AllegroGraph
geospatial
IBM Software Group
Christine Robson
database
Bigdata®: Enabling the Semantic Web at Web‐Scale
scalability
large-scale
Wei-Qi Wei
University of Oxford
Roberto García
University of Brighton
Irene Celino
Evren Sirin
Chris Callendar
Benjamin Grosof
Spyros Kotoulas
Nikki Rogers
Multi Visualisation and Dynamic Query for Effective Exploration of Semantic Data
Fraunhofer IGD
Véronique Malaisé
Kewen Wang
Mohamed Gaber
RDF2RDFa: Turning RDF into Snippets for Copy-and-Paste
In this demo and poster, we show a conceptual approach and an on-line tool that allows the use of RDFa for embedding non-trivial RDF models in the form of invisible div/span elements into existing Web content. This simplifies the publication of sophisticated RDF data, i.e., data that goes beyond simple property-value pairs, by broad audiences. It also empowers users whose access is limited to inserting XHTML snippets within Web-based authoring systems to add fully-fledged RDF and even OWL; such a limitation is frequent for users of CMS systems or wikis.
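The core trick — serializing triples as invisible span elements carrying RDFa attributes so they can be pasted into any XHTML page without changing its rendering — can be sketched as below. The markup shown is one plausible encoding assumed for illustration (full URIs only, no prefix handling), not the tool's actual output.

```python
# Sketch of the RDF2RDFa idea: emit triples about one subject as an
# invisible <div> of RDFa-annotated <span> elements.
from xml.sax.saxutils import escape, quoteattr

def rdf_to_rdfa(subject, triples):
    """triples: list of (predicate_uri, object, is_literal) for one subject."""
    parts = ['<div style="display:none" about=%s>' % quoteattr(subject)]
    for pred, obj, is_literal in triples:
        if is_literal:  # literal values go in element content via @property
            parts.append('  <span property=%s>%s</span>'
                         % (quoteattr(pred), escape(obj)))
        else:           # URI objects become links via @rel/@resource
            parts.append('  <span rel=%s resource=%s></span>'
                         % (quoteattr(pred), quoteattr(obj)))
    parts.append('</div>')
    return "\n".join(parts)

snippet = rdf_to_rdfa(
    "http://example.org/book/1",
    [("http://purl.org/dc/terms/title", "Semantic Web Primer", True),
     ("http://purl.org/dc/terms/creator", "http://example.org/person/2", False)])
print(snippet)
```

Because the wrapper div carries `display:none`, pasting the snippet into a CMS or wiki page adds machine-readable triples without any visible change to the page.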
Amit Sheth
Yahoo! Research
Mark Hefke
Metadata Management Associates
Divya S
XingZhi Sun
Yelena Yesha
Ivan Johgi
Learning Semantic Query Suggestions
An important application of semantic web technology is recognizing human-defined concepts in text. Query transformation is a strategy often used in search engines to derive queries that are able to return more useful search results than the original query, and most popular search engines provide facilities that let users complete, specify, or reformulate their queries. We study the problem of semantic query suggestion, a special type of query transformation based on identifying semantic concepts contained in user queries. We use a feature-based approach in conjunction with supervised machine learning, augmenting term-based features with search history-based and concept-specific features. We apply our method to the task of linking queries from real-world query logs (the transaction logs of the Netherlands Institute for Sound and Vision) to the DBpedia knowledge base. We evaluate the utility of different machine learning algorithms, features, and feature types in identifying semantic concepts using a manually developed test bed and show significant improvements over an already high baseline. The resources developed for this paper, i.e., queries, human assessments, and extracted features, are available for download.
Yue Pan
Tobias Ostheim
Heiko Paulheim
Descriptions, Ontologies, Collaboration, and Governance
information management
HR
John Darlington
Alexandre Passant
Collaborative Construction, Management and Linking of Structured Knowledge
Universitat de Lleida
RAPID: Enabling Scalable Ad-Hoc Analytics on the Semantic Web
As the amount of available RDF data continues to increase steadily, there is growing interest in developing efficient methods for analyzing such data. While recent efforts have focused on developing efficient methods for traditional data processing, analytical processing, which typically involves more complex queries, has received much less attention. The use of cost-effective parallelization techniques such as Google's MapReduce offers significant promise for achieving Web-scale analytics. However, currently available implementations are designed for simple data processing on structured data. In this paper, we present a language, RAPID, for scalable ad-hoc analytical processing of RDF data on MapReduce frameworks. It builds on Yahoo's Pig Latin by introducing primitives based on a specialized join operator, the MD-join, for expressing analytical tasks in a manner that is more amenable to parallel processing, as well as primitives for coping with the semi-structured nature of RDF data. Experimental evaluation results demonstrate significant performance improvements for analytical processing of RDF data over existing MapReduce-based techniques.
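The MD-join primitive mentioned here pairs a base table of groups with aggregate columns whose conditions can differ per column. A toy, sequential rendering of that shape (invented data and column definitions; the real operator runs on a MapReduce framework) looks like this:

```python
# MD-join sketch: a base table of groups plus per-column (condition, fold)
# aggregate definitions, evaluated against a detail fact table.
facts = [  # invented detail data
    {"paper": "p1", "year": 2008, "cites": 2},
    {"paper": "p2", "year": 2009, "cites": 5},
    {"paper": "p3", "year": 2009, "cites": 1},
]

base = [{"year": y} for y in sorted({f["year"] for f in facts})]

# Each aggregate column carries its own join condition and fold function,
# which is what distinguishes the MD-join from a plain GROUP BY.
columns = {
    "n_papers":    (lambda g, f: f["year"] == g["year"], lambda acc, f: acc + 1),
    "total_cites": (lambda g, f: f["year"] == g["year"], lambda acc, f: acc + f["cites"]),
}

def md_join(base, facts, columns):
    out = []
    for g in base:
        row = dict(g)
        for col, (cond, fold) in columns.items():
            acc = 0
            for f in facts:
                if cond(g, f):          # per-column condition
                    acc = fold(acc, f)  # per-column fold
            row[col] = acc
        out.append(row)
    return out

rows = md_join(base, facts, columns)
for row in rows:
    print(row)
```

Because each group's aggregates are computed independently, the outer loop over `base` parallelizes naturally, which is the property that makes the operator attractive on MapReduce.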
CSIRO ICT Centre, Australia
An RDF-based Normalized Model for Biomedical Lexical Grid
The Lexical Grid (LexGrid) project is an on-going community-driven initiative coordinated by the Mayo Clinic Division of Biomedical Statistics and Informatics. It provides a common terminology model to represent multiple vocabulary and ontology sources as well as a scalable and robust API for accessing such information. While successfully used and adopted in the biomedical and clinical community, the LexGrid model now needs to be aligned with emerging Semantic Web standards and specifications. This paper introduces the LexRDF model, which maps the LexGrid model elements to corresponding constructs in W3C specifications such as RDF, OWL, and SKOS. With LexRDF, the terminological information represented in LexGrid can be translated to RDF triples, therefore allowing LexGrid to leverage standard tools and technologies such as SPARQL and RDF triple stores.
LinkedGeoData - Adding a Spatial Dimension to the Web of Data
In order to employ the Web as a medium for data and information integration, comprehensive datasets and vocabularies are required as they enable the disambiguation and alignment of other data and information. Many real-life information integration and aggregation tasks are impossible without comprehensive background knowledge related to spatial features of the ways, structures and landscapes surrounding us. In this paper we contribute to the generation of a spatial dimension for the Data Web by elaborating on how the collaboratively collected OpenStreetMap data can be transformed and represented adhering to the RDF data model. We describe how this data can be interlinked with other spatial data sets, how it can be made accessible for machines according to the linked data paradigm and for humans by means of a faceted geo-data browser.
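The transformation this abstract describes — OpenStreetMap elements rendered in the RDF data model — can be illustrated with a toy converter. The URIs and tag-to-property mapping below are assumptions for the example, not LinkedGeoData's actual namespaces; only the WGS84 vocabulary URI is a real, widely used one.

```python
# Toy OSM-node-to-RDF transformation in the spirit of LinkedGeoData:
# one node with tags becomes N-Triples. example.org URIs are invented.
def node_to_ntriples(node_id, lat, lon, tags):
    s = "<http://example.org/node/%d>" % node_id
    wgs = "http://www.w3.org/2003/01/geo/wgs84_pos#"  # real W3C geo vocabulary
    triples = [
        (s, "<%slat>" % wgs, '"%f"' % lat),
        (s, "<%slong>" % wgs, '"%f"' % lon),
    ]
    for key, value in tags.items():  # each OSM tag becomes one property
        triples.append((s, "<http://example.org/ontology/%s>" % key,
                        '"%s"' % value))
    return "\n".join("%s %s %s ." % t for t in triples)

print(node_to_ntriples(26833462, 51.03, 13.72,
                       {"amenity": "pub", "name": "Zum Geruecht"}))
```

Once in this form, the data can be loaded into any triple store, queried with SPARQL, and interlinked with other spatial datasets, which is exactly the point of giving the Data Web a spatial dimension.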
Harold R. Solbrig
Exploiting Partial Information in Taxonomy Construction
One of the core services provided by OWL reasoners is classification: the discovery of all subclass relationships between class names occurring in an ontology. Discovering these relations can be computationally expensive, particularly if individual subsumption tests are costly or if the number of class names is large. We present a classification algorithm which exploits partial information about subclass relationships to reduce both the number of individual tests and the cost of working with large ontologies. We also describe techniques for extracting such partial information from existing reasoners. Empirical results from a prototypical implementation demonstrate substantial performance improvements compared to existing algorithms and implementations.
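The central saving described here — using told subclass facts to avoid expensive subsumption tests — can be demonstrated with a small sketch. This is an invented illustration of the general principle, not the paper's algorithm; `expensive_test` stands in for a real OWL reasoner's subsumption check, and the class names are assumptions.

```python
# Classification with partial information: the transitive closure of the
# told subclass facts answers many subsumption questions for free, so the
# (expensive) reasoner is only called on the remaining pairs.

def transitive_closure(told):
    """told: set of (sub, sup) pairs; returns all implied (sub, sup) pairs."""
    closure = set(told)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

calls = []
def expensive_test(sub, sup):
    calls.append((sub, sup))  # count how often we pay full price
    return False              # pretend: no further subsumptions hold

def classify(names, told):
    known = transitive_closure(told)
    result = set()
    for a in names:
        for b in names:
            if a == b:
                continue
            if (a, b) in known:         # answered from partial information
                result.add((a, b))
            elif expensive_test(a, b):  # only now invoke the reasoner
                result.add((a, b))
    return result

names = ["Student", "Person", "Agent"]
told = {("Student", "Person"), ("Person", "Agent")}
hierarchy = classify(names, told)
print(sorted(hierarchy), len(calls))
```

Of the six ordered class pairs, three are settled by the told facts and their closure, halving the number of reasoner calls even in this tiny example; on large ontologies the savings compound.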
Rob Shearer
Andrea Pugliese
The National Center for Biomedical Ontology Annotator is an ontology-based web service for annotation of textual biomedical data with biomedical ontology concepts. The biomedical community can use the Annotator service to tag datasets automatically with concepts from more than 200 ontologies coming from the two most important sets of biomedical ontology & terminology repositories: the UMLS Metathesaurus and NCBO BioPortal. Through annotation (or tagging) of datasets with ontology concepts, unstructured free-text data becomes structured and standardized. Such annotations contribute to create a biomedical semantic web that facilitates translational scientific discoveries by integrating annotated data.
NCBO Annotator: Semantic Annotation of Biomedical Data
Pyung Kim
V. S. Subrahmanian
Philippe Cudré-Mauroux
Rensselaer Polytechnic Institute
Jeff Z. Pan
Learning Semantic Query Suggestions
VenkatRam Yadav Jaltar
Andy Seaborne
Edelweiss, INRIA
Xue Qiao Hou
Hanmin Jung
Bernhard Steffen
Gerben de Vries
Ugur Kuter
Akshay Bhat
SensorMasher: Enabling open linked data in sensor data mashup
In this demo, we demonstrate a platform which makes sensor data available following the linked open data principle and enables the seamless integration of such data into mashups. SensorMasher publishes sensor data as Web data sources which can then easily be integrated with other (linked) data sources and sensor data. Raw sensor readings and sensors can be semantically described and annotated by the user. These descriptions can then be exploited in mashups and in linked open data scenarios and enable the discovery and integration of sensors and sensor data at large scale. The user-generated mashups of sensor data and linked open data can in turn be published as linked open data sources and be used by others.
Social Data on the Web
Abraham Bernstein
Olaf Hartig
Analogy Engines for the Semantic Web
We propose a new utility for the Semantic Web called the Analogy Engine. The Analogy Engine employs an example-based search approach for retrieving the most similar URIs for a given URI by comparing the number of shared links. The Analogy Engine is based on Analogy Space, which uses Singular Value Decomposition on a matrix representation of a semantic network. However, Analogy Space faces difficulty with networks having more than a few thousand nodes. We present our preliminary work on scaling Analogy Space by dividing the network into multiple communities and creating a separate Analogy Space for each community. We show that this procedure results in significant improvements and can be used for a large-scale network such as the Semantic Web.
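The core operation in this abstract — ranking URIs by the number of shared links — can be approximated without any SVD by an overlap score over adjacency sets. The toy network and the choice of Jaccard as the measure are assumptions for illustration; the actual system works in the reduced space produced by the decomposition.

```python
# Shared-link similarity sketch: rank other URIs by overlap of their
# outgoing link sets in a toy semantic network.
links = {  # invented adjacency sets
    "dbpedia:Cat":  {"isA:Mammal", "has:Fur", "has:Tail"},
    "dbpedia:Dog":  {"isA:Mammal", "has:Fur", "has:Tail", "can:Bark"},
    "dbpedia:Fish": {"isA:Animal", "can:Swim"},
}

def most_similar(uri, top=2):
    mine = links[uri]
    scored = []
    for other, theirs in links.items():
        if other == uri:
            continue
        union = mine | theirs
        jaccard = len(mine & theirs) / len(union) if union else 0.0
        scored.append((jaccard, other))
    scored.sort(reverse=True)  # highest overlap first
    return [other for score, other in scored[:top]]

print(most_similar("dbpedia:Cat"))
```

The SVD step in Analogy Space generalizes exactly this: instead of counting literal shared links, it compares nodes in a low-rank space where similar-but-not-identical links also contribute, which is why the matrix grows unmanageable past a few thousand nodes and motivates the community partitioning.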
CENTRIA
In this paper we present a method and an implementation for creating and processing semantic events from interactions with Web pages, which opens possibilities to build event-driven applications for the (Semantic) Web. Events, simple or complex, are models for things that happen, e.g., when a user interacts with a Web page. Events are consumed in some meaningful way, e.g., for monitoring reasons or to trigger actions such as responses. In order for receiving parties to understand events, e.g., comprehend what has led to an event, we propose a general event schema using RDFS. In this schema we cover the composition of complex events and event-to-event relationships. These events can then be used to route semantic information about an occurrence to different recipients, helping to make the Semantic Web active. Additionally, we present an architecture for detecting and composing events in Web clients. For the contents of events, we show how they are enriched with semantic information about the context in which they occurred. The paper is presented in conjunction with the use case of semantic advertising, which extends traditional clickstream analysis by introducing semantic short-term profiling, enabling discovery of the current interest of a Web user and therefore supporting advertisement providers in responding with more relevant advertisements.
Lifting events in RDF from interactions with annotated Web pages
Jose Iria
School of Computing and Information Systems, Athabasca University, Canada
Philipp Kärger
University of Aberdeen
Department of Computing Science, The University of Aberdeen
Northeastern University
SILK is a new knowledge representation (KR) language and system that integrates and extends recent theoretical and implementation advances in semantic rules and ontologies. It addresses fundamental requirements for scaling the Semantic Web to large knowledge bases in science and business that answer questions, proactively supply info, and reason powerfully. SILK radically extends the KR power of W3C OWL RL, SPARQL, and RIF-BLD, as well as of SQL and production rules. It includes defaults (cf. Courteous LP), higher-order features (cf. HiLog), frame syntax (cf. F-Logic), external actions (cf. production rules), and sound interchange with the main existing forms of knowledge/data in the Semantic Web and deep Web. These features cope with knowledge quality and context, provide flexible meta-reasoning, and activate knowledge.
The SILK System: Scalable and Expressive Semantic Rules
Universität der Bundeswehr München
Bridging the Gap Between Linked Data and the Semantic Desktop
The exponential growth of the World Wide Web in the last decade brought an explosion in the information space, which has important consequences also in the area of scientific research. Finding relevant work in a particular field and exploring the links between publications is currently a cumbersome task. Similarly, on the desktop, managing the publications acquired over time can represent a real challenge. Extracting semantic metadata, exploring the linked data cloud and using the semantic desktop for managing personal information represent, in part, solutions for different aspects of the above mentioned issues. In this paper, we propose an innovative approach for bridging these three directions with the overall goal of alleviating the information overload problem burdening early stage researchers. Our application combines harmoniously document engineering-oriented automatic metadata extraction with information expansion and visualization based on linked data, while the resulting documents can be seamlessly integrated into the semantic desktop.
Modeling and Query Patterns for Process Retrieval in OWL
Process modeling is a core task in software engineering in general and in web service modeling in particular. The explicit management of process models for purposes such as process selection and/or process reuse requires flexible and intelligent retrieval of process structures based on process entities and relationships, i.e. process activities, hierarchical relationship between activities and their parts, temporal relationships between activities, conditions on process flows as well as the modeling of domain knowledge. In this paper, we analyze requirements for modeling and querying of process models and present a pattern-oriented approach exploiting OWL-DL representation and reasoning capabilities for expressive process modeling and retrieval.
Yuan Ni
Jennifer Golbeck
Daniel Bingel
Institute for Scientific Interchange (ISI) Foundation
Danh Le Phuoc
Johns Hopkins University
Luciano Serafini
Sarah Magidson
Ulrike Sattler
Collaborative climate change research on the semantic web
WikiEarth (http://www.wikiearth.net) is a website designed for encouraging collaboration between researchers across the academic spectrum, and also serves as a test case to determine the limitations and benefits of using an ontological data structure to manage the input of natural science based data from around the world. Drawing upon Wikipedia's model of massive user collaboration, WikiEarth's motivation is to extend beyond this by formalizing the relationships between the data being entered. A semantic ontology is a natural candidate for data representation for three reasons: first, the hierarchical class structure of an OWL-Ontology helps avoid redundancy when developing simulations, as an operation can be applied to a class and all its subclasses; secondly, a framework like Jena helps eliminate human error and reduce the amount of data entry that needs to be performed; and finally, important restrictions regarding data entry are imposed by the ontological structure and Jena, as opposed to by a proprietary system developed for one specific application. Utilizing this infrastructure, a WikiEarth Climate Demonstration was successfully conceptualized, constructed, deployed and subsequently unveiled at the 2009 World Student Environmental Summit. The success of this application demonstrates that ontologies could be effectively purposed for a high-traffic production system.
Edith Schonberg
Oxford University Computing Laboratory
Richard Cyganiak
TripleRank: Ranking Semantic Web Data By Tensor Decomposition
The Semantic Web fosters novel applications targeting a more efficient and satisfying exploitation of the data available on the web, e.g. faceted browsing of linked open data. Large amounts and high diversity of knowledge in the Semantic Web pose the challenging question of appropriate relevance ranking for producing fine-grained and rich descriptions of the available data, e.g. to guide the user along most promising knowledge aspects. Existing methods for graph-based authority ranking lack support for fine-grained latent coherence between resources and predicates (i.e. support for link semantics in the linked data model). In this paper, we present TripleRank, a novel approach for faceted authority ranking in the context of RDF knowledge bases. TripleRank captures the additional latent semantics of Semantic Web data by means of statistical methods in order to produce richer descriptions of the available data. We model the Semantic Web by a 3-dimensional tensor that enables the seamless representation of arbitrary semantic links. For the analysis of that model, we apply the PARAFAC decomposition, which can be seen as a multi-modal counterpart to Web authority ranking with HITS. The results are groupings of resources and predicates that characterize their authority and navigational (hub) properties with respect to identified topics. We have applied TripleRank to multiple data sets from the linked open data community and gathered encouraging feedback in a user evaluation where TripleRank results have been exploited in a faceted browsing scenario.
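Since the abstract positions PARAFAC as a multi-modal counterpart to HITS, the baseline it generalizes is easy to show. Below is a plain, predicate-blind HITS iteration over a toy link graph (invented links); TripleRank's tensor decomposition extends this by keeping a separate slice per predicate, so hub/authority structure is found per topic rather than globally.

```python
# HITS sketch: iterate hub and authority scores on a directed graph.
def hits(graph, iterations=50):
    """graph: {node: set of nodes it links to}. Returns (hub, authority)."""
    nodes = set(graph) | {t for ts in graph.values() for t in ts}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        for n in nodes:  # authority: sum of hub scores of pages linking to n
            auth[n] = sum(hub[m] for m in graph if n in graph.get(m, ()))
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        for n in nodes:  # hub: sum of authority scores of pages n links to
            hub[n] = sum(auth[t] for t in graph.get(n, ()))
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return hub, auth

graph = {"a": {"popular", "c"}, "b": {"popular"}, "c": {"popular"}}
hub, auth = hits(graph)
print(max(auth, key=auth.get), max(hub, key=hub.get))
```

In the toy graph, the node everyone links to accumulates authority while the node linking to the most good targets accumulates hub score, which is exactly the pair of properties the abstract says the PARAFAC groupings characterize per topic.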
Flavio Palandri Antonelli
Government of South Australia
John Colton
Rob Vesse
User and the Social Semantic Web (Collaboration, Policy, and Trust)
Kathrin Dentler
Jon Phipps
Hugh Glaser
Thomas Irmscher
Vit Novacek
Management: Robust and scalable knowledge management and reasoning on the Web
Ken Kaneiwa
John Callahan
Juanzi Li
Sharing Ideas for Complex Problems in User Interaction
Maria Sokhn
We propose a diagrammatic logic that is suitable for specifying ontologies. We provide a specification of a simple ontology and include examples to show how to place constraints on ontology specifications and define queries. The framework also allows the depiction of instances, multiple ontologies to be related, and reasoning about ontologies.
A Proposed Diagrammatic Logic for Ontology Specification and Visualization
A Proposed Diagrammatic Logic for Ontology Specification and Visualization
Open University
Open University
Knowledge Media Institute (KMI), The Open University
Knowledge Media Institute (KMI), The Open University
Elena Mugellini
Elena Mugellini
Recent work on explanation of entailments in ontologies has focused on justifications and their variants. While in many cases just presenting the justification is sufficient for user understanding, and in all cases justifications are much better than nothing, we have empirically identified cases where understanding how a justification supports an entailment is inordinately difficult. Indeed, there are naturally occurring justifications that people, with varying expertise in OWL, cannot understand. To address this problem, we have developed a novel conceptual framework for justification-oriented proofs. Given a justification for an entailment in an ontology, intermediate inference steps, called lemmas, are automatically derived that bridge the gap between the axioms in the justification and the entailment. The proof shows in a stepwise way how the lemmas and ultimately the entailment follow from the justification. At the heart of the framework is the notion of a ``complexity model'', which predicts how easy or difficult it is for a user to understand a justification, and which is used for selecting the lemmas to insert into a proof. This poster and demo present this framework, backed by a prototype implementation.
Understanding Justifications for Entailments in OWL
Understanding Justifications for Entailments in OWL
We present a lightweight framework for processing uncertain emergent knowledge that comes from multiple resources with varying relevance. The framework is essentially RDF-compatible, but also allows for the direct representation of contextual features (e.g., provenance). We support soft integration and robust querying of the represented content based on well-founded notions of aggregation, similarity and ranking. A proof-of-concept implementation is presented and evaluated within large-scale knowledge-based search in life science articles.
Towards Lightweight and Robust Large Scale Emergent Knowledge Processing
Towards Lightweight and Robust Large Scale Emergent Knowledge Processing
Vulcan Inc.
Vulcan Inc.
This paper describes the alpha Urban LarKC, one of the first Urban Computing applications built with Semantic Web technologies. It is based on the LarKC platform and makes use of publicly available data sources on the Web which refer to interesting information about an urban environment (the city of Milano in Italy).
The alpha Urban LarKC, a Semantic Urban Computing application
The alpha Urban LarKC, a Semantic Urban Computing application
Edgar Meij
Edgar Meij
7a810553a7a0f3904e466318ca7f52f757c63bc7
Yi-Dong Shen
Yi-Dong Shen
In this paper we investigate different technologies to attack the automatic solution of orchestration problems based on synthesis from declarative specifications, a semantically enriched description of the services, and a collection of services available on a testbed. In addition to our previously presented tableaux-based synthesis technology, we consider two structurally rather different approaches here: using jMosel, our tool for Monadic Second-Order Logic on Strings, and the high-level programming language Golog, which internally makes use of planning techniques. As a common case study we consider the Mediation Scenario of the Semantic Web Service Challenge, which is a benchmark for process orchestration. All three synthesis solutions have been embedded in the jABC/jETI modeling framework, and used to synthesize the abstract mediator processes as well as their concrete, running (Web) service counterparts. Using the jABC as a common frame helps highlight the essential differences and similarities. It turns out that, at least at the level of complexity of the considered case study, all approaches behave quite similarly, with respect to both performance and modeling. We believe that turning the jABC framework into an experimentation platform along the lines presented here will help in understanding the application profiles of the individual synthesis solutions and technologies, answering questions like when the overhead to achieve compositionality pays off and where (heuristic) search is the technology of choice.
Synthesizing Semantic Web Service Compositions with jMosel and Golog
Synthesizing Semantic Web Service Compositions with jMosel and Golog
SIOOS: Semantically-driven Integration of Ocean Observing Systems
The diversity and heterogeneity of ocean observing systems obstructs the information flow needed to fully realise their benefits. SIOOS is a prototype for semantically-driven integration of ocean observation systems. SIOOS is built upon our Semantic Service Architecture platform, making rich use of complex ontologies and ontology-to-resource mappings to offer a flexible, semantically-driven integration environment. The SIOOS prototype draws on a federation of autonomous web and sensor observation services from the Integrated Ocean Observing System (IOOS). In this demonstration, we will use typical information management scenarios drawn from the ocean observation community to highlight major features of the SIOOS and show how these features address some of the challenges faced by the IOOS community.
SIOOS: Semantically-driven Integration of Ocean Observing Systems
Fabian Abel
Fabian Abel
Semantically-aided business process modeling
Enriching business process models with semantic annotations taken from an ontology has become a crucial necessity both in service provisioning, integration and composition, and in business process management. In our work we represent semantically annotated business processes as part of an OWL knowledge base that formalises the business process structure, the business domain, and a set of criteria describing correct semantic annotations. In this paper we show how Semantic Web representation and reasoning techniques can be effectively applied to formalise, and automatically verify, sets of constraints on Business Process Diagrams that involve both knowledge about the domain and the process structure. We also present a tool for the automated transformation of an annotated Business Process Diagram into an OWL ontology. The use of the Semantic Web techniques and tool presented in the paper results in novel support for the management of business processes in the phase of process modeling, whose feasibility and usefulness will be illustrated by means of a concrete example.
Semantically-aided business process modeling
Semantic Sensor Networks
Pete Pflugrath
Pete Pflugrath
2b8ec7ea77d64e406093b28271da4a11900a8f81
Yang Yu
Yang Yu
a64d67d8e07ae766c620ddc4b169158a194cf19d
Dragan Gasevic
Dragan Gasevic
CEFRIEL - ICT Institute Politecnico di Milano
CEFRIEL - ICT Institute Politecnico di Milano
SPoX: combining reactive Semantic Web policies and Social Semantic Data to control the behaviour of Skype
SPoX: combining reactive Semantic Web policies and Social Semantic Data to control the behaviour of Skype
In this demo paper we describe SPoX, a tool that allows users to define the behaviour of Skype based on reactive Semantic Web policies. SPoX (Skype Policy Extension) enables users to define policies stating, for example, who is allowed to call and whose chat messages show up. Moreover, SPoX also reacts to arbitrary events in Skype's social network, such as on-line status changes of users or the birthday of a friend. The decisions about how SPoX reacts are defined by means of Semantic Web policies that do not only consider the context of the user (such as time or on-line status) but also include Social Semantic Web data in the policy reasoning process. By this means, users can state that, for instance, only people defined as friends in their FOAF profile, only friends on Twitter, or even only people they wrote a paper with are allowed to call. Further, SPoX exploits Semantic Web techniques for advanced negotiations by means of exchanging policies over the Skype application channel. This procedure allows two clients to negotiate trust based on their SPoX policies before a connection - for example a Skype call - is established.
IBM China Research Laboratory,Beijing China
IBM China Research Laboratory
IBM China Research Laboratory,Beijing China
ibm china research lab
IBM China Research Laboratory
ibm china research lab
Siegfried Handschuh
Siegfried Handschuh
Harith Alani
Harith Alani
7c2ab565d1b14088c57954680eacbe9522329763
Role of Semantic Web in Provenance Management
Reginald Ford
Reginald Ford
Luke Steller
Luke Steller
36017e63da87a5a9c07cf66a1be6d0ea39076639
Bernhard Schandl
Bernhard Schandl
da4b0c8e8174f3d55e4dd0dd4d58e82bb01f1548
Michael Kifer
Michael Kifer
Extracting Enterprise Vocabularies Using Linked Open Data
Extracting Enterprise Vocabularies Using Linked Open Data
A common vocabulary is vital to smooth business operation, yet codifying and maintaining an enterprise vocabulary is an arduous, manual task. We describe a process to automatically extract a domain specific vocabulary (terms and types) from unstructured data in the enterprise guided by term definitions in Linked Open Data (LOD). We validate our techniques by applying them to the IT (Information Technology) domain, taking 58 Gartner analyst reports and using two specific LOD sources -- DBpedia and Freebase.
University of Sheffield
University of Sheffield
Semantic Web Technologies to Improve Customer Service
In this paper, we present an approach that exploits semantic web technologies to categorize specialized text and to create hierarchical facets representing the document content. For this purpose, domain knowledge represented by a thesaurus with relevant, domain-specific terms is used to identify relevant terms. Based on dependency information between single terms provided by the thesaurus (hypernomy, hyponymy), we create hierarchical facets representing the content of the text. The algorithm is applied to a collection of service messages and shows promising results in text categorization.
Semantic Web Technologies to Improve Customer Service
The field of biomedicine has embraced the Semantic Web probably more than any other field. As a result, there is a large number of biomedical ontologies covering overlapping areas of the field. We have developed BioPortal, an open, community-based repository of biomedical ontologies. We analyzed ontologies and terminologies in BioPortal and the Unified Medical Language System (UMLS), creating more than 4 million mappings between concepts in these ontologies and terminologies based on the lexical similarity of concept names and synonyms. We then analyzed the mappings and what they tell us about the ontologies themselves, the structure of the ontology repository, and the ways in which the mappings can help in the process of ontology design and evaluation. For example, we can use the mappings to guide users who are new to a field to the most pertinent ontologies in that field, to identify areas of the domain that are not covered sufficiently by the ontologies in the repository, and to identify which ontologies will serve well as background knowledge in domain-specific tools. While we used a specific (but large) ontology repository for the study, we believe that the lessons we learned about the value of a large-scale set of mappings to ontology users and developers are general and apply in many other domains.
What Four Million Mappings Can Tell You About Two Hundred Ontologies
What Four Million Mappings Can Tell You About Two Hundred Ontologies
Cherie H. Youn
Cherie H. Youn
Suvodeep Mazumdar
Suvodeep Mazumdar
Frank van Harmelen
Frank van Harmelen
University of Jyväskylä
University of Jyväskylä
Angela Piccini
Angela Piccini
A Decomposition-based Approach to Optimizing Conjunctive Query Answering in OWL DL
Esko Nuutila
Esko Nuutila
Fabio Ciravegna
Fabio Ciravegna
Recently, the W3C Linking Open Data effort has boosted the publication and inter-linkage of large amounts of RDF datasets on the Semantic Web. Various ontologies and knowledge bases with millions of RDF triples from Wikipedia and other sources, mostly in e-science, have been created and are publicly available. Recording provenance information of RDF triples aggregated from different heterogeneous sources is crucial in order to effectively support trust mechanisms, digital rights and privacy policies. Managing provenance becomes even more important when we consider not only explicitly stated but also implicit triples (through RDFS inference rules) in conjunction with declarative languages for querying and updating RDF graphs. In this paper we rely on colored RDF triples represented as quadruples to capture and manipulate explicit provenance information.
Coloring RDF Triples to Capture Provenance
Coloring RDF Triples to Capture Provenance
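The quadruple representation described in the abstract above is easy to picture. A minimal sketch, assuming a toy in-memory quad list (all graph names and triples below are invented for illustration):

```python
# Toy quad store: each triple carries a fourth "color" naming its source.
quads = [
    ("dbpedia:Milan", "rdf:type", "dbo:City", "dbpedia"),
    ("dbpedia:Milan", "dbo:country", "dbpedia:Italy", "dbpedia"),
    ("ex:Milan", "owl:sameAs", "dbpedia:Milan", "local-mappings"),
]

def triples_from(quads, source):
    """Project the plain triples contributed by one provenance color."""
    return [(s, p, o) for s, p, o, c in quads if c == source]

def provenance_of(quads, triple):
    """All colors (sources) that assert a given triple."""
    return {c for s, p, o, c in quads if (s, p, o) == triple}
```

Keeping the color alongside each triple is what lets trust or policy decisions filter by source; propagating colors through RDFS inference, as the paper does for implicit triples, is not covered by this sketch.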
Institute of Business Intelligence & Knowledge Discovery, Guangdong University of Foreign Studies State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
Institute of Business Intelligence & Knowledge Discovery, Guangdong University of Foreign Studies State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
Enrico Motta
Enrico Motta
Jozef Stefan Institute
Jozef Stefan Institute
Service Matchmaking and Resource Retrieval in the Semantic Web
Indiana University
Indiana University
Riichiro Mizoguchi
Riichiro Mizoguchi
Optimizing Web Service Composition while Enforcing Regulations
Technische Universität Dortmund
Technische Universität Dortmund
Goal-Directed Module Extraction for Explaining OWL DL Entailments
Julian Dolby
Julian Dolby
Tania Tudorache
Tania Tudorache
Jim Hendler
Jim Hendler
James Hendler
James Hendler
Web of Data Plumbing - Lowering the Barriers to Entry
Publishing and consuming content on the Web of Data often requires considerable expertise in the underlying technologies, as the expected services to achieve this are either not packaged in a simple and accessible manner, or are simply lacking. In this poster, we address selected issues by briefly introducing the following essential Web of Data services designed to lower the entry-barrier for Web developers: (i) a multi-ping service, (ii) a meta search service, and (iii) a universal discovery service.
Web of Data Plumbing - Lowering the Barriers to Entry
Military training and testing events are highly complex affairs, potentially involving dozens of legacy systems that need to interoperate in a meaningful way. There are superficial interoperability concerns (such as two systems not sharing the same messaging formats), but also substantive problems such as different systems not sharing the same understanding of the terrain, positions of entities, and so forth. We describe our approach to facilitating such events: describe the systems and requirements in great detail using ontologies, and use automated reasoning to automatically find and help resolve problems. The complexity of our problem took us to the limits of what one can do with OWL, and we needed to introduce some innovative techniques of using and extending it. We describe our novel ways of using SWRL and discuss its limitations as well as extensions to it that we found necessary or desirable. Another innovation is our representation of hierarchical tasks in OWL, and an engine that reasons about them. Our task ontology has proved to be a very flexible and expressive framework to describe requirements on resources and their capabilities in order to achieve some purpose.
Reasoning about Resources and Hierarchical Tasks Using OWL and SWRL
Reasoning about Resources and Hierarchical Tasks Using OWL and SWRL
On Detecting High-Level Changes in RDF/S KBs
On Detecting High-Level Changes in RDF/S KBs
An increasing number of scientific communities rely on Semantic Web ontologies to share and interpret data within and across research domains. These common knowledge representation resources are usually developed and maintained manually and essentially co-evolve along with the experimental evidence produced by scientists worldwide. Automatically detecting the differences between (two) versions of the same ontology in order to store or visualize their deltas is a challenging task for e-science. In this paper, we focus on languages allowing the formulation of concise and intuitive deltas, which are expressive enough to describe unambiguously any possible change and which can be effectively and efficiently detected. We propose a specific language that provably exhibits those characteristics and provide a change detection algorithm which is sound and complete with respect to the proposed language. Finally, we provide a promising experimental evaluation of our framework using real ontologies from the cultural and bioinformatics domains.
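At the lowest level, a delta between two versions of an ontology can be pictured as set difference over triple sets. The sketch below shows only that baseline, with invented example triples; it does not implement the high-level change language the paper proposes.

```python
# Two toy versions of the same ontology as sets of triples (invented data).
v1 = {("ex:Cat", "rdfs:subClassOf", "ex:Animal"),
      ("ex:Dog", "rdfs:subClassOf", "ex:Animal")}
v2 = {("ex:Cat", "rdfs:subClassOf", "ex:Mammal"),
      ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
      ("ex:Mammal", "rdfs:subClassOf", "ex:Animal")}

# Low-level delta: plain added/deleted triples.
delta = {"added": sorted(v2 - v1), "deleted": sorted(v1 - v2)}
```

A high-level language would instead report one intuitive change here (the ex:Cat superclass was specialized via a new intermediate class ex:Mammal) rather than three raw triple additions and deletions.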
University of Maryland, Baltimore County
University of Maryland, USA
University of Maryland, Baltimore County, USA
University of Maryland
University of Maryland, Baltimore County
University of Maryland, College Park
University of Maryland
University of Maryland, USA
University of Maryland, College Park
University of Maryland, Baltimore County, USA
Sebastian Hellmann
Sebastian Hellmann
Koninklijke Bibliotheek, den Haag
Koninklijke Bibliotheek, den Haag
LMI-CliCKE: The Climate Change Knowledge Engine (CliCKE) using Semantic MediaWiki
LMI is a not-for-profit research organization committed to helping government leaders and managers reach decisions that make a difference on issues of national importance. Climate change will be one of the defining issues of this century. It has moved from the province of specialists in environmental issues to one of concern for all government leaders. The International Panel on Climate Change (IPCC), the U.S. Global Change Research Program, and individual U.S. agencies have produced important studies of climate change. However, the IPCC Fourth Assessment Report (AR4) alone is over 2600 pages. Within these pages, LMI identified 2693 findings that include specific defined levels of uncertainty. The findings from the IPCC have been so thoroughly demonstrated by the scientific method that it would be a failure of responsibility to ignore them. They form the basis for the LMI Climate Change Knowledge Engine (LMI-CliCKE) and "A Federal Leaders Guide to Climate Change", an LMI-published book written to assist leaders of federal agencies in addressing the challenges associated with climate change. Thorough analysis of the 2693 findings led LMI to develop a semantically driven, wiki-based web site that allows users to explore, analyze, evaluate, and compare scientific findings related to climate change. The LMI Climate Change Knowledge Engine (LMI-CliCKE) gives full text and categorical details of the findings and relationships among them. As an initial prototype, the LMI climate team has selected and categorized all findings from the AR4.
LMI-CliCKE: The Climate Change Knowledge Engine (CliCKE) using Semantic MediaWiki
Analysis of a Real Online Social Network using Semantic Web Frameworks
Analysis of a Real Online Social Network using Semantic Web Frameworks
Social Network Analysis (SNA) provides graph algorithms to characterize the structure of social networks, strategic positions in these networks, specific sub-networks and decompositions of people and activities. Online social platforms like Facebook form huge social networks, enabling people to connect, interact and share their online activities across several social applications. We extended SNA operators using semantic web frameworks to include the semantics of these graph-based representations when analyzing such social networks and to deal with the diversity of their relations and interactions. We present here the results of this approach when it was used to analyze a real social network with 60,000 users connecting, interacting and sharing content.
Coloring RDF Triples to Capture Provenance
University of Southern California
University of Southern California
Wolfram Wöß
Wolfram Wöß
Seppo Torma
Seppo Torma
Aldo Gangemi
Aldo Gangemi
Georgi Kobilarov
Georgi Kobilarov
Viewpoint Management for Multi-Perspective issues of Ontologies
Viewpoint Management for Multi-Perspective issues of Ontologies
This paper discusses semantic technologies for multi-perspective issues of ontologies based on ontological viewpoint management. We developed two technologies and implemented them in the environmental and medical domains. The first is a conceptual map generation tool which allows users to explore an ontology according to their own perspectives and visualizes them in a user-friendly form, i.e. a conceptual map. The other is on-demand reorganization of an is-a hierarchy from an ontology. They contribute to an integrated understanding of ontologies and a solution to the multi-perspective issues of ontologies.
State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
Krishnaprasad Thirunarayan
Krishnaprasad Thirunarayan
Lushan Han
Lushan Han
03dbca6624c73f2355f702a25b457ce3085ffd12
Executing SPARQL Queries over the Web of Linked Data
Todd Schneider
Todd Schneider
9459fc24d625ef8be919943b80b7285b43a03724
487e1c1b906a7e301cc510f5a2213b32c4af6c4c
DOGMA: A Disk-Oriented Graph Matching Algorithm for RDF Databases
RDF is an increasingly important paradigm for the representation of information on the Web. As RDF databases increase in size to approach tens of millions of triples, and as sophisticated graph matching queries expressible in languages like SPARQL become increasingly important, scalability becomes an issue. To date, there is no graph-based indexing method for RDF data where the index was designed in a way that makes it disk-resident. There is therefore a growing need for indexes that can operate efficiently when the index itself resides on disk. In this paper, we first propose the DOGMA index for fast subgraph matching on disk and then develop a basic algorithm to answer queries over this index. This algorithm is then significantly sped up via an optimized algorithm that uses efficient (but correct) pruning strategies when combined with two different extensions of the index. We have implemented a preliminary system and tested it against four existing RDF database systems developed by others. Our experiments show that our algorithm performs very well compared to these systems, with orders of magnitude improvements for complex graph queries.
DOGMA: A Disk-Oriented Graph Matching Algorithm for RDF Databases
SAP Research
SAP Research
SAP AG, Research
SAP AG, Research
Darko Anicic
Darko Anicic
Ulf Leser
Ulf Leser
Jan Pieper
Jan Pieper
Padmashree Ravindra
Padmashree Ravindra
Functions over RDF Language Elements
Daniela Petrelli
Daniela Petrelli
6403f6d773563e19a6cc423d9a31de2d7128518f
Diana Maynard
Diana Maynard
Goeffrey Squire
Goeffrey Squire
Panagiotis Pediaditis
Panagiotis Pediaditis
Vocabulary Matching for Book Indexing Suggestion in Linked Libraries -- A Prototype Implementation & Evaluation
Vocabulary Matching for Book Indexing Suggestion in Linked Libraries -- A Prototype Implementation & Evaluation
In this paper, we report on a technology-transfer effort on using Semantic Web (SW) technologies, especially ontology matching, for solving a real-life library problem: book subject indexing. Our purpose is to streamline one library's book description process by suggesting new subjects based on descriptions created by other institutions, even when the vocabularies used are different. The case at hand concerns the National Library of the Netherlands (KB) and the network of Dutch local public libraries. We present a prototype subject suggestion tool, which is directly connected to the KB production cataloguing environment. We also report on the results of a user study and evaluation to assess the feasibility of exploiting state-of-the-art techniques in such a real-life application. Our prototype demonstrates that SW components can be seamlessly plugged into the KB production environment, which potentially brings a higher level of flexibility and openness to networked Cultural Heritage (CH) institutions. Technical hurdles can be tackled and the suggested subjects are often relevant, opening up exciting new perspectives on the daily work of the KB. However, the general performance level should be made higher to warrant seamless embedding in the production environment, notably by considering more contextual metadata for the suggestion process.
Wright State University
Wright State University
Kno.e.sis Research Center, Wright State University
Kno.e.sis Research Center, Wright State University
Kouji Kozaki
Kouji Kozaki
c493b1b07fa4dacfdda46edc55ebae341758972f
Dimitris Kotzinos
Dimitris Kotzinos
Jiao Tao
Jiao Tao
47b232e485805df0e86a6d284279494c7eaa7f1e
Rudy Depena
Rudy Depena
Aditya Kalyanpur
Aditya Kalyanpur
b1fe362f3eabc1dac5c9829fb53fd2de0d556f5b
Héctor Pérez-Urbina
Héctor Pérez-Urbina
9eea7cd12ff082cbaacd63de008b470c38d92cd7
Integrity Constraints in OWL
In many data-centric applications, it is desirable to use OWL as an expressive schema language with which one expresses constraints that must be satisfied by instance data. However, specific aspects of OWL's standard semantics---i.e., the Open World Assumption (OWA) and the absence of Unique Name Assumption (UNA)---make it difficult to use OWL in this way. In this paper, we present an Integrity Constraint (IC) semantics for OWL axioms, show that IC validation can be reduced to query answering, and present our preliminary results with a prototype implementation using Pellet.
Integrity Constraints in OWL
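The reduction of IC validation to query answering mentioned in the abstract above can be pictured as a closed-world search for counter-examples. The hand-rolled Python analogue below, with invented class and property names, stands in for the SPARQL-based reduction the paper targets.

```python
# Toy ABox as (subject, predicate, object) triples; names are invented.
abox = [
    ("prod1", "rdf:type", "Product"),
    ("prod1", "hasPrice", "9.99"),
    ("prod2", "rdf:type", "Product"),  # no price: violates the constraint below
]

def violates_min_one(abox, cls, prop):
    """Closed-world check of 'every instance of cls has at least one prop value',
    phrased as a query for counter-examples. Under OWL's standard open-world
    semantics this would not be an inconsistency, which is exactly the gap an
    IC semantics closes."""
    instances = {s for s, p, o in abox if p == "rdf:type" and o == cls}
    has_prop = {s for s, p, o in abox if p == prop}
    return sorted(instances - has_prop)
```

Running `violates_min_one(abox, "Product", "hasPrice")` flags `prod2`, whereas a standard OWL reasoner would simply assume an unknown price exists.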
Isabel Cruz
Isabel Cruz
Mari Carmen Suárez-Figueroa
Mari Carmen Suárez-Figueroa
f9494ff417ea83c9d0fb4ddb0f4bb8b2c76b0b22
Michael Hausenblas
Michael Hausenblas
327b61f3721afbd39dceadf5e5b4fc2d79d5dcc8
Sergej Sizov
Sergej Sizov
Sheila A. McIlraith
Sheila A. McIlraith
The University of Manchester
The University of Manchester
Takeru Hirota
Takeru Hirota
Ian Horrocks
Ian Horrocks
ISTC-CNR
ISTC-CNR
Ian Oliver
Ian Oliver
Rafael Peñaloza
Rafael Peñaloza
1b374d6bfb4ef9ace30a9166758fa5270dfec8bd
GNOWSYS-mode in Emacs for collaborative construction of knowledge networks in plain text
GNOWSYS-mode in Emacs for collaborative construction of knowledge networks in plain text
GNOWSYS-mode is an Emacs extension package for knowledge networking and ontology management using GNOWSYS (Gnowledge Networking and Organizing SYStem) as a server. The demonstration shows how to collaboratively build ontologies and semantic networks in intuitive plain text without any of the RDF notations, though importing and exporting in RDF is possible.
Freie Universität Berlin
Freie Universität Berlin
Christopher G. Chute
Christopher G. Chute
Produce and Consume Linked Data with Drupal!
Currently a large number of Web sites are driven by Content Management Systems (CMS) which manage textual and multimedia content but also - inherently - carry valuable information about a site's structure and content model. Exposing this structured information to the Web of Data has so far required considerable expertise in RDF and OWL modelling and additional programming effort. In this paper we tackle one of the most popular CMS: Drupal. We enable site administrators to export their site content model and data to the Web of Data without requiring extensive knowledge on Semantic Web technologies. Our modules create RDFa annotations and -- optionally -- a SPARQL endpoint for any Drupal site out of the box. Likewise, we add the means to map the site data to existing ontologies on the Web with a search interface to find commonly used ontology terms. We also allow a Drupal site administrator to include existing RDF data from remote SPARQL endpoints on the Web in the site. When brought together, these features allow networked RDF Drupal sites that reuse and enrich Linked Data. We finally discuss the adoption of our modules and report on a use case in the biomedical field and the current status of its deployment.
Produce and Consume Linked Data with Drupal!
GoodRelations Tools and Applications
The adoption of ontologies for the Web of Data can be increased by tools that help populate the respective knowledge bases from legacy content, e.g. existing databases, business applications, or proprietary data formats. In this demo and poster, we show the results from our efforts of developing a suite of open-source tools for creating e-commerce descriptions for the Web of Data based on the GoodRelations ontology. Also, we demonstrate how RDF/XML data can be (1) submitted to Yahoo SearchMonkey via the RDF2DataRSS conversion tool, (2) inspected using the SearchMonkey Meta-Data Inspector, and (3) how common data inconsistencies can be spotted with the GoodRelations Validator.
GoodRelations Tools and Applications
Universität Karlsruhe
University of Karlsruhe
Universität Karlsruhe
Universität Karlsruhe
Universität Karlsruhe
University of Karlsruhe
University of Toronto
University of Toronto
Jesse Weaver
Jesse Weaver
2cfc52eeec64d31ef5affc87402c6c7188ed4a7c
Juan Sequeda
Juan Sequeda
e36a6c5f10bf558670ec81424012f651b25e23a4
Dynamic Querying of Mass-Storage RDF Data with Rule-Based Entailment Regimes
Aman Goel
Aman Goel
Dimitrios Skoutas
Dimitrios Skoutas
e742494f7e4e8ee20bb07156d3a37f182282fa6c
Kavitha Srinivas
Kavitha Srinivas
Scalable Semantic Web Knowledge Base Systems
ISLA, University of Amsterdam
University of Amsterdam
ISLA, University of Amsterdam
University of Amsterdam
Querying and Semantically Integrating Spreadsheet Collections with XLWrap-Server - Use Cases and Mapping Design Patterns
In this demo we will present XLWrap-Server, which is a wrapper for collections of spreadsheets providing a SPARQL and Linked Data interface similar to D2R-Server. It is based on XLWrap, a novel approach for generating RDF graphs of arbitrary complexity from spreadsheets with different layouts. To the best of our knowledge, XLWrap is the first spreadsheet wrapper supporting cross tables and tables where data is not aligned in rows. It features a full expression algebra based on the syntax of OpenOffice Calc which can be easily extended by users, and it supports Microsoft Excel, Open Document, and large CSV spreadsheets. XLWrap-Server can be used to integrate information from a collection of spreadsheets. We will show several use cases and mapping design patterns in our demonstration.
Querying and Semantically Integrating Spreadsheet Collections with XLWrap-Server - Use Cases and Mapping Design Patterns
MIT
MIT
Gang Hu
Gang Hu
XLWrap - Querying and Integrating Arbitrary Spreadsheets with SPARQL
In this paper a novel approach is presented for generating RDF graphs of arbitrary complexity from various spreadsheet layouts. Currently, none of the available spreadsheet-to-RDF wrappers supports cross tables and tables where data is not aligned in rows. Similar to RDF123, XLWrap is based on template graphs where fragments of triples can be mapped to specific cells of a spreadsheet. Additionally, it features a full expression algebra based on the syntax of OpenOffice Calc and various shift operations, which can be used to repeat similar mappings in order to wrap cross tables including multiple sheets and spreadsheet files. The set of available expression functions includes most of the native functions of OpenOffice Calc and can be easily extended by users of XLWrap. Additionally, XLWrap is able to execute SPARQL queries, and since it is possible to define multiple virtual class extents in a mapping specification, it can be used to integrate information from multiple spreadsheets. XLWrap supports a special identity concept which allows linking anonymous resources (blank nodes) - which may originate from different spreadsheets - in the target graph.
XLWrap - Querying and Integrating Arbitrary Spreadsheets with SPARQL
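As a rough, invented illustration of what such a mapping produces, the sketch below flattens a small cross table (the layout that row-oriented wrappers cannot handle) into triples and evaluates a SPARQL-style basic graph pattern over them in plain Python. All names and numbers are made up; XLWrap would describe the mapping declaratively in a template graph rather than in code.

```python
# Hypothetical cross table: rows = products, columns = years.
cross_table = {
    "Widget": {"2008": 120, "2009": 150},
    "Gadget": {"2008": 80, "2009": 95},
}

def wrap(table):
    """Emit one observation resource per (product, year) cell."""
    triples = []
    for product, by_year in table.items():
        for year, amount in by_year.items():
            s = f"ex:{product}_{year}"
            triples += [
                (s, "ex:product", f"ex:{product}"),
                (s, "ex:year", year),
                (s, "ex:amount", amount),
            ]
    return triples

triples = wrap(cross_table)

# Basic-graph-pattern match, as a SPARQL engine would do it:
# ?s ex:year "2009" . ?s ex:amount ?o
subjects_2009 = {s for (s, p, o) in triples if p == "ex:year" and o == "2009"}
amounts_2009 = sorted(o for (s, p, o) in triples
                      if p == "ex:amount" and s in subjects_2009)
# amounts_2009 == [95, 150]
```

The point of the declarative approach is that this loop structure (and its shift operations across sheets) lives in the mapping specification, not in application code.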
Cui Tao
Cui Tao
067987f78a4d22130513a4745cafdb334f03934f
Digital Enterprise Research Institute
DERI, NUI Galway, Ireland
Digital Enterprise Research Institute, National University of Ireland Galway
Digital Enterprise Research Institute, NUI Galway
DERI, NUI Galway, Ireland
DERI Galway, National University of Ireland, Galway
National University of Ireland, Galway
School of Engineering and Informatics and DERI, National University of Ireland, Galway
Digital Enterprise Research Institute, National University of Ireland, Galway, Ireland
Digital Enterprise Research Institute, National University of Ireland, Galway, Ireland
Digital Enterprise Research Institute
Digital Enterprise Research Institute, National University of Ireland, Galway
DERI, National University of Ireland, Galway
School of Engineering and Informatics and DERI, National University of Ireland, Galway
National University of Ireland, Galway
DERI Galway, National University of Ireland, Galway
Digital Enterprise Research Institute, NUI Galway
DERI, National University of Ireland, Galway
Digital Enterprise Research Institute, National University of Ireland Galway
Digital Enterprise Research Institute, National University of Ireland, Galway
Progeny Systems
Progeny Systems
Grigoris Antoniou
Grigoris Antoniou
Jose Luis Ambite
Jose Luis Ambite
Jie Tang
Jie Tang
Andreas Radinger
Andreas Radinger
Massachusetts Institute of Technology
Massachusetts Institute of Technology
Gnowledge Lab, Homi Bhabha Centre, Tata Institute of Fundamental Research
Gnowledge Lab, Homi Bhabha Centre, Tata Institute of Fundamental Research
An important issue for the Semantic Web is how to combine open-world ontology languages with closed-world (non-monotonic) rule paradigms. Several proposals for hybrid languages allow concepts to be simultaneously defined by an ontology and rules, where rules may refer to concepts in the ontology and the ontology may also refer to predicates defined by the rules. Hybrid MKNF knowledge bases are one such proposal, for which both a stable and a well-founded semantics have been defined. The definition of Hybrid MKNF knowledge bases is parametric on the ontology language, in the sense that non-monotonic rules can extend any decidable ontology language. In this paper we define a query-driven procedure for Hybrid MKNF knowledge bases that is sound with respect to the original stable model-based semantics, and is correct with respect to the well-founded semantics. This procedure is able to answer conjunctive queries, and is parametric on an inference engine for reasoning in the ontology language. Our procedure is based on an extension of a tabled rule evaluation to capture reasoning within an ontology by modeling it as an interaction with an external oracle and, with some assumptions on the complexity of the oracle compared to the complexity of the ontology language, maintains the data complexity of the well-founded semantics for hybrid MKNF knowledge bases.
Queries to Hybrid MKNF Knowledge Bases through Oracular Tabling
Queries to Hybrid MKNF Knowledge Bases through Oracular Tabling
Actively Learning Ontology Matching via User Interaction
Ontology matching plays a key role in semantic interoperability. Many methods have been proposed for automatically finding the alignment between heterogeneous ontologies. However, in many real-world applications, finding the alignment in a completely automatic way is highly infeasible. Ideally, an ontology matching system would have an interactive interface that allows users to provide feedback to guide the automatic algorithm. Fundamentally, we need to answer the following questions: how can a system carry out an efficient interactive process with the user? How many interactions are sufficient for finding a more accurate matching? To address these questions, we propose an active learning framework for ontology matching, which tries to find the most informative candidate matches to ask the user about. The user's feedback is used to: 1) correct mistaken matches and 2) propagate the supervision to help the entire matching process. Three measures are proposed to estimate the confidence of each matching candidate. A correct-propagation algorithm is further proposed to maximize the spread of the user's guidance. Experimental results on several public data sets show that the proposed approach can significantly improve the matching accuracy (+8.0% over the baseline methods).
Actively Learning Ontology Matching via User Interaction
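A minimal sketch of the uncertainty-sampling loop such a framework might use. The candidate pairs, their matcher confidences, and the simulated user ("oracle") are all fabricated for illustration; a real matcher would compute the scores from the ontologies and ask an actual person.

```python
# Candidate matches with fabricated matcher confidences in [0, 1].
candidates = {
    ("Author", "Writer"): 0.52,
    ("Paper", "Article"): 0.95,
    ("Venue", "Journal"): 0.48,
    ("Topic", "Subject"): 0.61,
}
# Simulated user feedback (invented ground truth).
oracle = {
    ("Author", "Writer"): True,
    ("Paper", "Article"): True,
    ("Venue", "Journal"): False,
    ("Topic", "Subject"): True,
}

def most_informative(cands, decided):
    # Least-confident selection: the score closest to 0.5 is the most
    # ambiguous, hence the most informative to ask the user about.
    undecided = {k: v for k, v in cands.items() if k not in decided}
    return min(undecided, key=lambda k: abs(undecided[k] - 0.5))

def active_match(cands, user, budget):
    decided = {}
    for _ in range(budget):
        query = most_informative(cands, decided)
        decided[query] = user[query]  # one interaction with the user
    # Remaining candidates fall back to thresholding the matcher score.
    for k, v in cands.items():
        decided.setdefault(k, v >= 0.5)
    return decided

alignment = active_match(candidates, oracle, budget=2)
```

With a budget of two questions, the loop spends them on the two near-0.5 candidates and leaves the confident ones to the automatic threshold, which is the intuition behind querying only the most informative matches.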
Max Hirschel
Max Hirschel
Martin Gaedke
Martin Gaedke
Andreas Harth
Andreas Harth
ac570c08660f09f436e069d1d664a5654648a95e
Improved Semantic Graphs with Word Sense Disambiguation
Improved Semantic Graphs with Word Sense Disambiguation
Semantic graphs can be seen as a way of representing and visualizing textual information in more structured, RDF-like graphs. The reader thus obtains an overview of the content, without having to read through the text. In building a compact semantic graph, an important step is grouping similar concepts under the same label and connecting them to external repositories. This is achieved through disambiguating word senses, in our case by assigning the sense to a concept given its context. The paper presents an unsupervised, knowledge based word sense disambiguating algorithm for linking semantic graph nodes to the WordNet vocabulary. The algorithm is integrated in the semantic graph generation pipeline, improving the semantic graph readability and conciseness. Experimental evaluation of the proposed disambiguation algorithm shows that it gives good results.
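The gloss-overlap idea behind knowledge-based disambiguation can be sketched in a few lines. The mini sense inventory below is invented for the example; the paper links graph nodes to WordNet glosses instead, but the overlap principle is the same.

```python
# Invented two-sense inventory; a real system would use WordNet.
SENSES = {
    "bank": {
        "bank/finance": "institution that accepts deposits and lends money",
        "bank/river": "sloping land beside a body of water",
    },
}

def disambiguate(word, context):
    """Pick the sense whose gloss shares the most words with the context."""
    ctx = set(context.lower().split())
    glosses = SENSES[word]
    return max(glosses, key=lambda s: len(ctx & set(glosses[s].split())))

sense = disambiguate("bank", "they moored the boat on the river bank near the water")
# sense == "bank/river"
```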
Institute of Computer Science (ICS), FORTH and TEI of Serres
Institute of Computer Science (ICS), FORTH and TEI of Serres
Forgetting is an important tool for reducing ontologies by eliminating some concepts and roles while preserving sound and complete reasoning. Attempts have previously been made to address the problem of forgetting in relatively simple description logics (DLs) such as DL-Lite and extended EL. The ontologies used in these attempts were mostly restricted to TBoxes rather than general knowledge bases (KBs). However, the issue of forgetting for general KBs in more expressive description logics, such as ALC and OWL DL, is largely unexplored. In particular, the problem of characterizing and computing forgetting for such logics is still open. In this paper, we first define semantic forgetting about concepts and roles in ALC ontologies and state several important properties of forgetting in this setting. We then define the result of forgetting for concept descriptions in ALC, state the properties of forgetting for concept descriptions, and present algorithms for computing the result of forgetting for concept descriptions. Unlike the case of DL-Lite, the result of forgetting for an ALC ontology does not exist in general, even for the special case of concept forgetting. This makes the problem of how to compute forgetting in ALC more challenging. We address this problem by defining a series of approximations to the result of forgetting for ALC ontologies and studying their properties and their application to reasoning tasks. We use the algorithms for computing forgetting for concept descriptions to compute these approximations. Our algorithms for computing approximations can be embedded into an ontology editor to enhance its ability to manage and reason in (large) ontologies.
Concept and Role Forgetting in ALC Ontologies
Concept and Role Forgetting in ALC Ontologies
Daniel Elenius
Daniel Elenius
74aaa5dfcb84bf01edf8a8506ad253737abbdcff
Daniel Meyer
Daniel Meyer
Jing Mei
Jing Mei
IBM Watson Research Center
IBM Watson Research Center
Ontology Dynamics
Amir Ghazvinian
Amir Ghazvinian
fd478b7bc7d746f6fb582d5ffb49b7d57307436c
Mike Dean
Mike Dean
6006a914f7d645142fadf08d9a4e33aee98416d0
Kewi, I3S, University of Nice, France
Kewi, I3S, University of Nice, France
FON-School of Business Administration, Belgrade, Serbia
FON-School of Business Administration, Belgrade, Serbia
Anna Lisa Gentile
Anna Lisa Gentile
Kerry Taylor
Kerry Taylor
Demonstration: Wireless Access Network Selection Enabled by Semantic Technologies
Service oriented access in a multi-application, multi-access network environment is faced with the problem of cross-layer interoperability among technologies. In this demo, we present a knowledge base (KB) which contains local (user terminal specific) knowledge that enables pro-active network selection by translating technology specific parameters to higher-level, more abstract parameters. We implemented a prototype which makes use of semantic technology (namely ResearchCyc) for creating the elements of the KB and uses reasoning to determine the best access network. The system implements technology-specific parameter mapping according to the IEEE 802.21 draft standard recommendation.
Demonstration: Wireless Access Network Selection Enabled by Semantic Technologies
Semantic formalisms represent content in a uniform way according to ontologies. This enables manipulation and reasoning via automated means (e.g. Semantic Web services), but limits the user's ability to explore the semantic data from a point of view that originates from knowledge representation motivations. We show how, for user consumption, a visualization of semantic data according to some easily graspable dimensions (e.g. space and time) provides effective sense-making of data. In this paper, we look holistically at the interaction between users and semantic data, and propose multiple visualization strategies and dynamic filters to support the exploration of semantic-rich data. We discuss a user evaluation and how interaction challenges could be overcome to create an effective user-centred framework for the visualization and manipulation of semantic data. The approach has been implemented and evaluated on a real company archive.
Multi Visualisation and Dynamic Query for Effective Exploration of Semantic Data
Multi Visualisation and Dynamic Query for Effective Exploration of Semantic Data
University of Bristol
University of Bristol
Concept and Role Forgetting in ALC Ontologies
Queries to Hybrid MKNF Knowledge Bases through Oracular Tabling
Semantic-Powered Research Profiling
Research profiling is a widely adopted method to monitor research development and rank research performance. This paper describes a novel infrastructure to generate semantic-powered research profiles for research fields, organizations and individuals. It crawls related websites and news feeds, extracts research terms, research objects and relations from them, and uses the proposed Research Ontology to model them as RDF triples to facilitate semantic queries and semantic mining on burst detection, hot topic detection, dynamics of research, and relation mining. The authors implement a research profiling experiment in the Artificial Intelligence area to show the effectiveness of research profiling based on semantic mining.
Semantic-Powered Research Profiling
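A toy sketch of the mining step, with invented facts modelled as subject/predicate/object triples in the spirit of the described Research Ontology; hot-topic detection then reduces to counting topics within a year.

```python
from collections import Counter

# Invented extracted facts; a real pipeline would populate these from
# crawled websites and news feeds.
triples = [
    ("paper:1", "hasTopic", "ontology matching"),
    ("paper:1", "year", 2008),
    ("paper:2", "hasTopic", "ontology matching"),
    ("paper:2", "year", 2009),
    ("paper:3", "hasTopic", "linked data"),
    ("paper:3", "year", 2009),
    ("paper:4", "hasTopic", "ontology matching"),
    ("paper:4", "year", 2009),
]

def topic_counts(facts, year):
    """Hot-topic detection reduced to counting topics within one year."""
    papers = {s for (s, p, o) in facts if p == "year" and o == year}
    return Counter(o for (s, p, o) in facts if p == "hasTopic" and s in papers)

hot_2009 = topic_counts(triples, 2009).most_common(1)[0]
# hot_2009 == ("ontology matching", 2)
```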
David Karger
David Karger
Actively Learning Ontology Matching via User Interaction
The Data-gov Wiki: A Semantic Web Portal for Linked Government Data
The Data-gov Wiki: A Semantic Web Portal for Linked Government Data
The Data-gov Wiki is the delivery site for a project where we investigate the role of linked data in producing, processing and utilizing the government datasets found on data.gov. The project has generated over 2 billion triples from government data and a few interesting applications covering data access, visualization, integration, linking and analysis.
Terry Payne
Terry Payne
Peter Haase
Peter Haase
Kenneth Baclawski
Kenneth Baclawski
Juergen Umbrich
Juergen Umbrich
Nigel R Shadbolt
Nigel R Shadbolt
Lifting events in RDF from interactions with annotated Web pages
In this demo we show the current state of our client-side rule engine for the Web. The engine is an implementation for creating and processing semantic events from interaction with Web pages, which opens up possibilities to build event-driven applications for the (Semantic) Web. Events, simple or complex, are models for things that happen, e.g., when a user interacts with a Web page. Events are consumed in some meaningful way, e.g., for monitoring reasons or to trigger actions such as responses. In order for receiving parties to understand events, i.e. to comprehend what has led to an event, we demonstrate a general event schema using RDFS.
Lifting events in RDF from interactions with annotated Web pages
Donald Hillman
Donald Hillman
Andreas Wechselberger
Andreas Wechselberger
What Four Million Mappings Can Tell You About Two Hundred Ontologies
Matthew Rowe
Matthew Rowe
bd2cda94c756832460fd7c8f6de5c3d2525bbdba
Shengping Liu
Shengping Liu
0e121ae8674d91b1a7c10dfe7fe8854a819997d2
Tripcel: Exploring RDF Graphs using the Spreadsheet Metaphor
Tripcel: Exploring RDF Graphs using the Spreadsheet Metaphor
Spreadsheet tools are often used in business and private scenarios in order to collect and store data, and to explore and analyze these data by executing functions and aggregations. They allow users to incrementally compose calculations by filling cells with formulas that are evaluated against data in the sheet, whereas expressions can be nested via cell references. In this paper we present Tripcel, a tool that applies the spreadsheet concept to RDF. It allows users to formulate expressions over the contents of an RDF graph, to arrange these expressions in a grid, and to interactively inspect their evaluation results. Thus it can be used to perform analysis tasks over large data sets within an understandable and familiar interface.
Giovambattista Ianni
Giovambattista Ianni
Marco Rospocher
Marco Rospocher
Nagarjuna G
Nagarjuna G
c8639c719aa81215be91641331613cfc3aa49d2b
Lee Feigenbaum
Lee Feigenbaum
Guillaume Ereteo
Guillaume Ereteo
Leadsto is a prototypical Semantic Portal for collaboratively describing statements of the form "x leads to y" (e.g. accident leads to traffic jam). Existing elements of statements (precedents, antecedents) can be linked with each other, and completely new elements can be created. Individual statements can be created and the set of stored statements further extended and developed collaboratively on the Web by humans; in addition, automated approaches for extracting further statements from any web page are employed. The constantly growing net-like structure can be searched and navigated. The major benefit of the system is to automatically discover and make available causal chains of the form "x leads to y", "y leads to z", etc. (as well as the reverse direction). In this way, not yet known facts, as well as their provenance, can be collaboratively discovered by the wisdom of a crowd.
Leadsto - Collaboratively Constructing and Discovering Causal Chains
Leadsto - Collaboratively Constructing and Discovering Causal Chains
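The chain-discovery step can be sketched as a traversal of the "leads to" graph. The statements below are invented examples, not data from the system.

```python
# Invented "x leads to y" statements as a directed adjacency map.
LEADS_TO = {
    "accident": ["traffic jam"],
    "traffic jam": ["delay"],
    "delay": ["missed meeting"],
}

def chains_from(start, graph):
    """Depth-first enumeration of all maximal causal chains from `start`."""
    stack = [[start]]
    out = []
    while stack:
        path = stack.pop()
        nexts = graph.get(path[-1], [])
        if not nexts:
            out.append(path)  # chain cannot be extended further
        for n in nexts:
            stack.append(path + [n])
    return out

chains = chains_from("accident", LEADS_TO)
# chains == [["accident", "traffic jam", "delay", "missed meeting"]]
```

The reverse direction mentioned in the abstract would simply traverse an inverted adjacency map.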
Analysis of a Real Online Social Network using Semantic Web Frameworks
Diane Hillmann
Diane Hillmann
Autonomous RDF Replication on Mobile Devices
Autonomous RDF Replication on Mobile Devices
Mobile applications are of increasing interest to research and industry. The widespread use and improved capabilities of portable devices enable the deployment of sophisticated and powerful applications that provide the user with services at any time and location. When such applications are built on top of Linked Data, permanent network connectivity is required, which is often unavailable or expensive to establish. Hence we propose a framework that uses RDF-based context descriptions to selectively and proactively replicate data to mobile devices. These replicas can be used when no network connection can be established, thus making mobile applications and users more autonomous and stable.
On Detecting High-Level Changes in RDF/S KBs
Laura Hollink
Laura Hollink
Ying Ding
Ying Ding
Application integration can be carried out on three different levels: the data source level, the business logic level, and the user interface level. With ontologies-based integration on the data source level dating back to the 1990s and semantic web services for integration on the business logic level coming of age, it is time for the next logical step: employing ontologies for integration on the user interface level. Such an approach will improve both the development times and the usability of integrated applications. In this poster, we present an approach employing ontologies for integrating applications on the user interface level.
A Framework for Ontologies-based User Interface Integration
A Framework for Ontologies-based User Interface Integration
Matthias Broecheler
Matthias Broecheler
Stephane Corlosquet
Stephane Corlosquet
ec4755462cdda21224fa63f27c5553d1cc875891
Conjunctive query answering for EL++ ontologies has recently drawn much attention, as the Description Logic EL++ captures the expressivity of many large ontologies in the biomedical domain and is the foundation for the OWL 2 EL profile. In this paper, we propose a practical approach for conjunctive query answering in a fragment of EL++, namely acyclic EL+, that supports role inclusions. This approach can be implemented with low cost by leveraging any existing relational database management system to do the ABox data completion and query answering. We conducted a preliminary experiment to evaluate our approach using a large clinical data set and show our approach is practical.
A Practical Approach for Scalable Conjunctive Query Answering on Acyclic EL+ Knowledge Base
A Practical Approach for Scalable Conjunctive Query Answering on Acyclic EL+ Knowledge Base
Roland Stühmer
Roland Stühmer
6fe24a977630ab673af1955194b4a8475e8d9441
University of Karlsruhe / Southeast University
University of Karlsruhe / Southeast University
Mark Mattern
Mark Mattern
Ultrawrap: Using SQL Views for RDB2RDF
Ultrawrap: Using SQL Views for RDB2RDF
Ultrawrap is an automatic wrapping system that synthesizes an OWL ontology from the database's SQL schema and provides SPARQL query services for legacy relational databases. The system intensionally defines triples by using SQL view statements. The benefits of this organization include: the virtualization of the triple table assures real-time consistency between relational and semantic accesses to the database, and the existing SQL optimizer implements the most challenging aspects of rewriting SPARQL to equivalent queries on the relational representation of the data. Initial experiments are auspicious.
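The view-based organization can be sketched with the stdlib sqlite3 module: a relational table is exposed as a virtual subject/predicate/object view, and triple-pattern queries run against it through the ordinary SQL optimizer. The table, columns, and vocabulary below are invented, not Ultrawrap's actual schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO customer VALUES (1, 'Acme'), (2, 'Globex')")

# The "triple table" is never materialized; it is a view over the
# relational data, so relational and semantic access stay consistent.
con.execute("""
    CREATE VIEW triples(s, p, o) AS
    SELECT 'customer:' || id, 'rdf:type', 'ex:Customer' FROM customer
    UNION ALL
    SELECT 'customer:' || id, 'ex:name', name FROM customer
""")

# A SPARQL triple pattern such as <customer:1> ex:name ?o rewrites to:
rows = con.execute(
    "SELECT o FROM triples WHERE s = 'customer:1' AND p = 'ex:name'"
).fetchall()
# rows == [("Acme",)]
```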
Delia Rusu
Delia Rusu
4181530957d92d561984536783271cf2683095cb
Sang-goo Lee
Sang-goo Lee
Daniel Krause
Daniel Krause
1d2bb019561f56651b7312c43bb327300f61057d
Policy Aware Content Reuse on the Web
The Web allows users to share their work very effectively, leading to the rapid re-use and remixing of content on the Web, including text, images, and videos. Scientific research data, social networks, blogs, photo sharing sites and other such applications, known collectively as the Social Web, contain large amounts of increasingly complex information. Such information from several Web pages can be very easily aggregated, mashed up and presented in other Web pages. Content generation of this nature inevitably leads to many copyright and license violations, motivating research into effective methods to detect and prevent such violations. This is supported by an experiment on Creative Commons (CC) attribution license violations from samples of Web sites that had at least one embedded Flickr image, which revealed that the attribution license violation rate of Flickr images on the Web is around 70-90%. Our primary objective is to enable users to do the right thing and comply with CC licenses associated with Web media, instead of preventing them from doing the wrong thing or detecting violations of these licenses. As a solution, we have implemented two applications: (1) the Attribution License Violations Validator, which can be used to validate users' derived work against attribution licenses of reused media, and (2) the Semantic Clipboard, which provides license awareness of Web media and enables users to copy them along with the appropriate license metadata.
Policy Aware Content Reuse on the Web
Tiziana Margaria
Tiziana Margaria
Klaas Dellschaft
Klaas Dellschaft
eae2717871287ee074d3e89a2ea58c2ee0026b8b
This demo shows the integration of spatial and semantic reasoning for the recognition of ship behavior. We recognize abstract behavior such as "ferry trip" and derive that the ship showing this behavior is a "ferry". We accomplish this by abstracting over low-level ship trajectory data and applying Prolog rules that express properties of ship behavior. These rules make use of the GeoNames ontology and a spatial indexing package for SWI-Prolog, which is available as open source software.
Spatial and Semantic Reasoning to Recognize Ship Behavior
Spatial and Semantic Reasoning to Recognize Ship Behavior
DOGMA: A Disk-Oriented Graph Matching Algorithm for RDF Databases
Semantic Web Reasoning by Swarm Intelligence
Semantic Web Reasoning by Swarm Intelligence
Semantic Web reasoning systems are confronted with the task to process growing amounts of distributed, dynamic resources. We propose a novel way of approaching the challenge by RDF graph traversal, exploiting the advantages of Swarm Intelligence. Our nature-inspired methodology is realised by self-organising swarms of autonomous, light-weight entities that traverse RDF graphs by following paths, aiming to instantiate pattern-based inference rules.
Parallel Materialization of the Finite RDFS Closure for Hundreds of Millions of Triples
In this paper, we consider the problem of materializing the complete finite RDFS closure in a scalable manner; this includes those parts of the RDFS closure that are often ignored such as literal generalization and container membership properties. We point out characteristics of RDFS that allow us to derive an embarrassingly parallel algorithm for producing said closure, and we evaluate our C/MPI implementation of the algorithm on a cluster with 128 cores using different-size subsets of the LUBM 10,000-university data set. We show that the time to produce inferences scales linearly with the number of processes, evaluating this behavior on up to hundreds of millions of triples. We also show the number of inferences produced for different subsets of LUBM10k. To the best of our knowledge, our work is the first to provide RDFS inferencing on such large data sets in such low times. Finally, we discuss future work in terms of promising applications of this approach including OWL2RL rules, MapReduce implementations, and massive scaling on supercomputers.
Parallel Materialization of the Finite RDFS Closure for Hundreds of Millions of Triples
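A much-simplified, sequential sketch of such materialization: apply RDFS rules to a fixpoint over a set of triples. Only two rules (rdfs9 and rdfs11) and a handful of invented facts are shown; the paper covers the complete finite closure and its parallel decomposition.

```python
def rdfs_closure(triples):
    """Forward-chain rdfs9/rdfs11 until no new triples are derived."""
    closed = set(triples)
    while True:
        new = set()
        sub = {(s, o) for (s, p, o) in closed if p == "rdfs:subClassOf"}
        # rdfs11: subClassOf is transitive.
        for (a, b) in sub:
            for (c, d) in sub:
                if b == c:
                    new.add((a, "rdfs:subClassOf", d))
        # rdfs9: instances propagate up the class hierarchy.
        for (s, p, o) in closed:
            if p == "rdf:type":
                for (a, b) in sub:
                    if o == a:
                        new.add((s, "rdf:type", b))
        if new <= closed:  # fixpoint reached
            return closed
        closed |= new

facts = {
    ("ex:Cat", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:tom", "rdf:type", "ex:Cat"),
}
closure = rdfs_closure(facts)
```

The embarrassingly parallel observation of the paper is that, for RDFS, such rule applications can be partitioned across processes without communication; this sketch runs them in one.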
Nicholas Gibbins
Nicholas Gibbins
Scalable Distributed Reasoning using MapReduce
We address the problem of scalable distributed reasoning, proposing a technique for materialising the closure of an RDF graph based on MapReduce. We have implemented our approach on top of Hadoop and deployed it on a compute cluster of up to 64 commodity machines. We show that a naive implementation on top of MapReduce is straightforward but performs badly and we present several non-trivial optimisations. Our algorithm is scalable and allows us to compute the RDFS closure of 865M triples from the Web (producing 30B triples) in less than two hours, faster than any other published approach.
Scalable Distributed Reasoning using MapReduce
FZI Forschungszentrum Informatik
FZI Research Center for Information Technology
FZI Forschungszentrum Informatik
FZI Research Center for Information Technology
We present iSMART, a system for intelligent Semantic MedicAl Record reTrival. Health Level 7 Clinical Document Architecture (CDA)[4], a standard based on XML, is well recognized for the representation and exchange of medical records. In CDAs, medical ontologies/terminologies, e.g. SNOMED CT[2], are used to specify the semantic meaning of clinical statements. To better use the structure and semantic information in CDAs for more effective search, we propose and implement the iSMART system. First, we design and implement an XML-to-RDF convertor to extract RDF statements from medical records using declarative mappings. Then, we design a reasoner to infer additional information by integrating knowledge from the domain ontologies based on the extracted RDF statements. Finally, we index the inferred set of RDF statements and provide semantic search over them. A demonstration video is available online[1].
iSMART : intelligent Semantic MedicAl Record reTrival
iSMART : intelligent Semantic MedicAl Record reTrival
Kay-Uwe Schmidt
Kay-Uwe Schmidt
Johann-Christoph Freytag
Johann-Christoph Freytag
Tim Berners-Lee
Tim Berners-Lee
Jobe Microsystems, University of Guelph
Jobe Microsystems, University of Guelph
Sinan Sen
Sinan Sen
Carolina Fortuna
Carolina Fortuna
b9a2be64973345007bd6d6b2e6ebf6b4a67808e8
LifeLogOn: Log on to Your Lifelog Ontology!
LifeLogOn: Log on to Your Lifelog Ontology!
LifeLogOn is a system that enables users to easily and rapidly convert heterogeneous relational log data into an instance-level integrated log ontology without requiring an understanding of any ontology language. It also visualizes the created log ontology and allows users to navigate entities and events in the ontology by following their semantic relationships. This demo shows that integrating logs from many different sources can be a practical starting point for realizing life logging that can support users' memory and future intelligent services.
Na Hong
Na Hong
Context and Domain Knowledge Enhanced Entity Spotting in Informal Text
This paper explores the application of restricted relationship graphs (RDF) and statistical NLP techniques to improve named entity annotation in challenging Informal English domains. We validate our approach using on-line forums discussing popular music. Named entity annotation is particularly difficult in this domain because it is characterized by a large number of ambiguous entities, such as the Madonna album Music or Lilly Allen's pop hit Smile. We evaluate improvements in annotation accuracy that can be obtained by restricting the set of possible entities using real-world constraints. We find that constrained domain entity extraction raises the annotation accuracy significantly, making an infeasible task practical. We then show that we can further improve annotation accuracy by over 50% by applying SVM-based NLP systems trained on word usages in this domain.
Context and Domain Knowledge Enhanced Entity Spotting in Informal Text
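The constraint idea can be caricatured as follows: an ambiguous work name only counts as an entity when a related artist co-occurs in the text. The artist/work gazetteer and the posts are invented, and a real system would consult the relationship graph and an SVM rather than substring tests.

```python
# Invented gazetteer: artist -> set of (ambiguous) album/track names.
KNOWN = {
    "madonna": {"music"},
    "lily allen": {"smile"},
}

def spot(text):
    """Spot work names only when their artist appears in the same text."""
    lowered = text.lower()
    hits = []
    for artist, works in KNOWN.items():
        if artist in lowered:
            hits += [(artist, w) for w in works if w in lowered]
    return hits

hits = spot("Loved the new Madonna album Music, total classic")
# hits == [("madonna", "music")]
```

Without the co-occurrence constraint, common words like "music" or "smile" would be spotted everywhere, which is exactly the ambiguity the paper measures.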
Institute of Computer Science (ICS), FORTH
Institute of Computer Science (ICS), FORTH
University of Vienna
University of Vienna
Clark & Parsia, LLC
Clark & Parsia, LLC
Radhika Sridhar
Radhika Sridhar
Omar Abou Khaled
Omar Abou Khaled
Sheng Ping Liu
Sheng Ping Liu
As the Linked Data initiatives and Web of Data become more widespread, sites that process and re-present the published data are growing in size and number. One challenge is to ensure that such sites do not themselves fall into the trap of failing to publish their new knowledge in a readily available manner. Not only should the work of such sites be re-published for Linked Data users, but it should also be accessible to site builders who have not yet embraced the Semantic Web. This paper presents the work that has been done with the RKBExplorer system to support this task, and describes examples of how it is used.
RKBPlatform: Opening up Services in the Web of Data
RKBPlatform: Opening up Services in the Web of Data
Yannis Theoharis
Yannis Theoharis
Task Oriented Evaluation of Module Extraction Techniques
Wladimir Krasnov
Wladimir Krasnov
Universität Potsdam
Universität Potsdam
Semantric Ltd.
Semantric Ltd.
Bogdan Ivan
Bogdan Ivan
Zhe Wang
Zhe Wang
98f18d25daaf88121bf64a6ee85586b2787b0c2c
Seoul National University
Seoul National University
Universität Leipzig
Universität Leipzig
Investigating the Semantic Gap through Query Log Analysis
Investigating the Semantic Gap through Query Log Analysis
Significant efforts have focused in the past years on bringing large amounts of metadata online, and the success of these efforts can be seen in the impressive number of web sites exposing data in RDFa or RDF/XML. However, little is known about the extent to which this data fits the needs of ordinary web users with everyday information needs. In this paper we study what we perceive as the semantic gap between the supply of data on the Semantic Web and the needs of web users as expressed in the queries submitted to a major Web search engine. We perform our analysis on both the level of instances and ontologies. First, we look at how much data is actually relevant to Web queries and what kind of data it is. Second, we provide a generic method to extract the attributes that Web users are searching for regarding particular classes of entities. This method allows us to contrast class definitions found in Semantic Web vocabularies with the attributes of objects that users are interested in. Our findings are crucial to measuring the potential of semantic search, but also speak to the state of the Semantic Web in general.
Cambridge Semantics
Cambridge Semantics
Dave Grosvenor
Dave Grosvenor
478165da6b5ca32f7dca5ff64134afad567cd328
Hewlett Packard Labs
Hewlett Packard Labs
Guilin Qi
Guilin Qi
7618a76dbd015fd744d2a2d5ef46642e65764490
Meena Nagarajan
Meena Nagarajan
Philip Groth
Philip Groth
Manfred Hauswirth
Manfred Hauswirth
Christian Battista
Christian Battista
b88e9d85708ea0686db95204aae81b28107567ea
Daniel Gruhl
Daniel Gruhl
Maria Maleshkova
Maria Maleshkova
ca3affb919025dc0ff537e91f30d1aad61c53f2b
Talis
Talis
University of Illinois at Chicago (UIC)
University of Illinois at Chicago (UIC)
Axel Polleres
Axel Polleres
Giorgos Flouris
Giorgos Flouris
dc808b065733afaf3aa11d334f478b26f7ef527d
Helsinki University of Technology
Helsinki University of Technology
Oscar Corcho
Oscar Corcho
Lehigh University
Lehigh University
Does the Semantic Web Need Ontologies?
IBM Research
IBM Research
A Conflict-based Operator for Mapping Revision: Theory and Implementation
National Institute of Information and Communications Technology (NICT)
National Institute of Information and Communications Technology (NICT)
Optimizing QoS-Aware Semantic Web Service Composition
Optimizing QoS-Aware Semantic Web Service Composition
Ranking and optimization of web service compositions are among the most interesting challenges at present. Since web services can be enhanced with formal semantic descriptions, forming semantic web services, it becomes conceivable to exploit the quality of semantic links between services (of any composition) as one of the optimization criteria. For this we propose to use the semantic similarities between output and input parameters of web services. Coupling this with other criteria, such as quality of service (QoS), allows us to rank and optimize compositions achieving the same goal. Here we suggest an innovative and extensible optimization model designed to balance semantic fit (or functional quality) with non-functional QoS metrics. To allow the use of this model in the context of a large number of services, as foreseen by the strategic EC-funded project SOA4All, we propose and test the use of Genetic Algorithms.
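A toy genetic algorithm over fabricated candidate services illustrates the weighted balance of semantic fit and QoS. All scores, weights, and GA parameters here are invented; the paper's model is richer than this additive fitness.

```python
import random

# Two abstract tasks; each gene picks one candidate: (name, sem_fit, qos).
CANDIDATES = [
    [("s1", 0.9, 0.4), ("s2", 0.6, 0.9)],   # task 1
    [("s3", 0.5, 0.8), ("s4", 0.95, 0.7)],  # task 2
]
W_SEM, W_QOS = 0.5, 0.5  # invented weights balancing the two criteria

def fitness(ind):
    return sum(W_SEM * CANDIDATES[i][g][1] + W_QOS * CANDIDATES[i][g][2]
               for i, g in enumerate(ind))

def evolve(generations=30, pop_size=8, seed=42):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(c)) for c in CANDIDATES]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(CANDIDATES))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                   # mutation
                i = rng.randrange(len(CANDIDATES))
                child[i] = rng.randrange(len(CANDIDATES[i]))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

On this tiny instance the space could be enumerated; the GA only pays off at the scale of service registries foreseen by SOA4All.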
Jemma Wu
Jemma Wu
Nokia Research
Nokia Research
Thomas Franz
Thomas Franz
4c8c2ba49c3566f72814a605458a4e89063838b5
Thomas Russ
Thomas Russ
Universität Zürich
Universität Zürich
Alain Barrat
Alain Barrat
Guus Schreiber
Guus Schreiber
Chiara Di Francescomarino
Chiara Di Francescomarino
Jeff Heflin
Jeff Heflin
Efficient Query Answering for OWL 2
University of Applied Sciences of Western Switzerland
University of Applied Sciences of Western Switzerland
Asunción Gómez-Pérez
Asunción Gómez-Pérez
Discovering and Maintaining Links on the Web of Data
Discovering and Maintaining Links on the Web of Data
The Web of Data is built upon two simple ideas: Employ the RDF data model to publish structured data on the Web and to create explicit data links between entities within different data sources. This paper presents the Silk - Linking Framework, a toolkit for discovering and maintaining data links between Web data sources. Silk consists of three components: 1. A link discovery engine, which computes links between data sources based on a declarative specification of the conditions that entities must fulfill in order to be interlinked; 2. A tool for evaluating the generated data links in order to fine-tune the linking specification; 3. A protocol for maintaining data links between continuously changing data sources. The protocol allows data sources to exchange both linksets as well as detailed change information and enables continuous link recomputation. The interplay of all the components is demonstrated within a life science use case.
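The link discovery engine described above computes links from a declarative condition that entity pairs must satisfy. A toy sketch of that pattern, using plain string similarity as the condition; the records, the compared property, and the 0.5 threshold are invented and are not Silk's actual specification language.

```python
from difflib import SequenceMatcher

# Two invented data sources, each mapping an entity ID to its properties.
SOURCE_A = {"a1": {"label": "Berlin"}, "a2": {"label": "Hamburg"}}
SOURCE_B = {"b1": {"label": "Berlin, Germany"}, "b2": {"label": "Munich"}}

def similarity(x, y):
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

def discover_links(src_a, src_b, prop, threshold):
    """Emit owl:sameAs-style candidate links where the condition holds."""
    links = []
    for ida, ea in src_a.items():
        for idb, eb in src_b.items():
            if similarity(ea[prop], eb[prop]) >= threshold:
                links.append((ida, idb))
    return links

print(discover_links(SOURCE_A, SOURCE_B, "label", 0.5))
```

A real linking specification would combine several such comparisons (and handle change notifications for link maintenance); the threshold here plays the role the evaluation tool helps to fine-tune.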
Exploiting User Feedback to Improve Semantic Web Service Discovery
State-of-the-art discovery of Semantic Web services is based on hybrid algorithms that combine semantic and syntactic matchmaking. These approaches are purely based on similarity measures between parameters of a service request and available service descriptions, which, however, fail to completely capture the actual functionality of the service or the quality of the results returned by it. On the other hand, with the advent of Web 2.0, active user participation and collaboration has become an increasingly popular trend. Users often rate or group relevant items, thus providing valuable information that can be taken into account to further improve the accuracy of search results. In this paper, we tackle this issue, by proposing a method that combines multiple matching criteria with user feedback to further improve the results of the matchmaker. We extend a previously proposed dominance-based approach for service discovery, and describe how user feedback is incorporated in the matchmaking process. We evaluate the performance of our approach using a publicly available collection of OWL-S services.
Exploiting User Feedback to Improve Semantic Web Service Discovery
Social Trust Based Web Service Composition
Damian Steer
Damian Steer
Clement Jonquet
Clement Jonquet
b6b6a2c7e586632f2d654f537f6ab32af4d763ee
Luka Bradesko
Luka Bradesko
John Domingue
John Domingue
Humboldt-Universität zu Berlin
Humboldt-Universität zu Berlin
Humboldt-Universitaet zu Berlin
Humboldt-Universitaet zu Berlin
Executing SPARQL Queries over the Web of Linked Data
The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.
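The core idea in the abstract above, discovering relevant data during execution by following RDF links in partial results, can be sketched as follows. The in-memory `WEB` table stands in for HTTP dereferencing, and all URIs and triples are invented; the paper's actual iterator pipeline and non-blocking extension are not reproduced.

```python
# URI -> triples served when that URI is dereferenced (toy stand-in for HTTP).
WEB = {
    "ex:alice": [("ex:alice", "knows", "ex:bob")],
    "ex:bob":   [("ex:bob", "knows", "ex:carol")],
    "ex:carol": [],
}

def fetch(uri):
    return WEB.get(uri, [])

def traverse(seed_uri, predicate):
    """Find all objects reachable via `predicate`, discovering data on the fly."""
    dataset, results = set(), []
    frontier, seen = [seed_uri], set()
    while frontier:
        uri = frontier.pop()
        if uri in seen:
            continue
        seen.add(uri)
        dataset.update(fetch(uri))        # dereference the URI, grow the dataset
        for s, p, o in sorted(dataset):
            if s == uri and p == predicate:
                results.append((s, o))
                frontier.append(o)        # follow the RDF link to new data
    return results

print(traverse("ex:alice", "knows"))
```

The second result pair only becomes answerable because dereferencing `ex:alice` surfaced the link to `ex:bob`, which is exactly the discovery-during-execution behaviour the abstract describes.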
Executing SPARQL Queries over the Web of Linked Data
Timothy Finin
Timothy Finin
Alpesh Gajbe
Alpesh Gajbe
Kristina Lerman
Kristina Lerman
James Dobree
James Dobree
Multi-faceted Tagging in TagMe!
In this paper we present TagMe!, a tagging and exploration front-end for Flickr images, which enables users to add categories to tag assignments and to attach tag assignments to a specific area within an image. We analyze the differences between tags and categories and show how both facets can be applied to learn semantic relations between concepts referenced by tags and categories. Further, we discuss how the multi-faceted tagging helps to improve the retrieval of folksonomy entities. The TagMe! system is currently available at http://tagme.groupme.org
Multi-faceted Tagging in TagMe!
Emily Kigel
Emily Kigel
Vicky Papavassiliou
Vicky Papavassiliou
Franz Baader
Franz Baader
Exploiting User Feedback to Improve Semantic Web Service Discovery
Discovering and Maintaining Links on the Web of Data
SKOS2OWL: An Online Tool for Deriving OWL and RDF-S Ontologies from SKOS Vocabularies
Hierarchical classifications are available for many domains of interest. They often provide a large amount of category definitions and some sort of hierarchies. Thanks to their size and popularity, they are a promising ground for publishing and organizing data on the Semantic Web. Unfortunately, classifications can mostly not be directly used as ontologies in OWL, because they are not (or at least: very bad) ontologies. In particular, the labels in categories often lack a context-neutral notion of what it means to be an instance of that category, and the meaning of the hierarchical relations is often not a strict subClassOf. SKOS2OWL is an online tool that allows deriving consistent RDF-S or OWL ontologies from most hierarchical classifications available in the W3C SKOS exchange format. SKOS2OWL helps the user narrow down the intended meaning of the available categories to classes and guides the user through several modeling choices. In particular, SKOS2OWL can draw a representative random sample of relevant conceptual elements in the SKOS file and asks the user to make statements about their meaning. This can be used to make reliable modeling decisions without looking at every single element, which would be unfeasible for large classifications.
SKOS2OWL: An Online Tool for Deriving OWL and RDF-S Ontologies from SKOS Vocabularies
Automatically Constructing Semantic Web Services from Online Sources
Automatically Constructing Semantic Web Services from Online Sources
The work on integrating sources and services in the Semantic Web assumes that the data is either already represented in RDF or OWL or is available through a Semantic Web Service. In practice, there is a tremendous amount of data on the Web that is not available through the Semantic Web. In this paper we present an approach to automatically discover and create new Semantic Web Services. The idea behind this approach is to start with a set of known sources and the corresponding semantic descriptions and then discover similar sources, extract the source data, build semantic descriptions of the sources, and then turn them into Semantic Web Services. We implemented an end-to-end solution to this problem in a system called Deimos and evaluated the system across five different domains. The results demonstrate that the system can automatically discover, learn semantic descriptions, and build Semantic Web Services with only example sources and their descriptions as input.
Technical University of Cluj-Napoca
Technical University of Cluj-Napoca
Mark Cameron
Mark Cameron
Malte Isberner
Malte Isberner
Cosmin Stroe
Cosmin Stroe
Griffith University
Griffith University
Bijan Parsia
Bijan Parsia
Joel Sachs
Joel Sachs
Universidad Politécnica de Madrid (Ontology Engineering Group)
Universidad Politécnica de Madrid (Ontology Engineering Group)
Universidad Politécnica de Madrid
Universidad Politécnica de Madrid
Basuki Setio
Basuki Setio
Mamoru Ohta
Mamoru Ohta
Paolo Tonella
Paolo Tonella
Aba-Sah Dadzie
Aba-Sah Dadzie
Ontologies are tools for describing and structuring knowledge, with many applications in searching and analyzing complex knowledge bases. Since building them manually is a costly process, there are various approaches for bootstrapping ontologies automatically through the analysis of appropriate documents. Such an analysis needs to find the concepts and the relationships that should form the ontology. However, since relationship extraction methods are imprecise and cannot homogeneously cover all concepts, the initial set of relationships is usually inconsistent and rather imbalanced - a problem which, to the best of our knowledge, was mostly ignored so far. In this paper, we define the problem of extracting a consistent as well as properly structured ontology from a set of inconsistent and heterogeneous relationships. Moreover, we propose and compare three graph-based methods for solving the ontology extraction problem. We extract relationships from a large-scale data set of more than 325K documents and evaluate our methods against a gold standard ontology comprising more than 12K relationships. Our study shows that an algorithm based on a modified formulation of the dominating set problem outperforms greedy methods.
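The best-performing method in the abstract above builds on the dominating set problem. A rough sketch of the standard greedy baseline for that problem, picking nodes that cover themselves and their neighbours until everything is covered; the toy concept graph is invented, and the paper's modified, evidence-weighted formulation is not reproduced here.

```python
# Invented undirected relationship graph between extracted concepts.
GRAPH = {
    "animal": {"dog", "cat"},
    "dog": {"animal", "poodle"},
    "cat": {"animal"},
    "poodle": {"dog"},
    "vehicle": {"car"},
    "car": {"vehicle"},
}

def greedy_dominating_set(graph):
    """Greedily choose nodes until every node is chosen or adjacent to one."""
    uncovered = set(graph)
    chosen = []
    while uncovered:
        # Pick the node that covers the most still-uncovered nodes.
        node = max(graph, key=lambda n: len(({n} | graph[n]) & uncovered))
        chosen.append(node)
        uncovered -= {node} | graph[node]
    return chosen

print(greedy_dominating_set(GRAPH))
```

Every concept ends up either selected or directly related to a selected concept, which is the covering property the ontology-extraction formulation exploits.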
Graph-Based Ontology Construction from Heterogenous Evidences
Graph-Based Ontology Construction from Heterogenous Evidences
IBM Almaden Research Center
IBM Almaden Research Center
LMI
LMI
Eva Blomqvist
Eva Blomqvist
e7d5fd08545de88f7152dd93845da596cf00d4d8
Automatically Constructing Semantic Web Services from Online Sources
gOntt: a Tool for Scheduling Ontology Development Projects
The Ontology Engineering field lacks tools that guide ontology developers to plan and schedule their ontology development projects. gOntt helps ontology developers in two ways: (a) to schedule ontology projects; and (b) to execute such projects based on the schedule and using the NeOn Methodology.
gOntt: a Tool for Scheduling Ontology Development Projects
Rodney Topor
Rodney Topor
Jyotishman Pathak
Jyotishman Pathak
Optimizing QoS-Aware Semantic Web Service Composition
Zoran Jeremic
Zoran Jeremic
b22734dc298b2cc455126e777622d34baf0c746a
Syracuse University, Institute of Chemical Technology
Syracuse University, Institute of Chemical Technology
Syracuse University
Syracuse University
Jasper Tredgold
Jasper Tredgold
Learning to Classify Identity Web References using RDF Graphs
Learning to Classify Identity Web References using RDF Graphs
The need to monitor a person's web presence has risen in recent years due to identity theft and lateral surveillance becoming prevalent web actions. In this paper we present a machine learning-inspired bootstrapping approach to monitor identity web references that only requires as input an initial small seed set of data modelled as an RDF graph. We vary the combination of different RDF graph matching paradigms with different machine learning classifiers and observe the effects on the classification of identity web references. We present preliminary results of an evaluation in order to show the variation in accuracy of these different permutations.
Natalya Noy
Natalya Noy
Peter Mika
Peter Mika
c0d6551197a0295bfc604841a994d544e0091665
Composition Optimizer: A Tool for Optimizing Quality of Semantic Web Service Composition
Composition Optimizer: A Tool for Optimizing Quality of Semantic Web Service Composition
Ranking and optimization of web service compositions are some of the most interesting challenges at present. Since web services can be enhanced with formal semantic descriptions, forming the "semantic web services", it becomes conceivable to exploit the quality of semantic links between services (of any composition) as one of the optimization criteria. For this we propose to use the semantic similarities between output and input parameters of web services. Coupling this with other criteria such as quality of service (QoS) allows us to rank and optimize compositions achieving the same goal. We present the Composition Optimizer tool, using an innovative and extensible optimization model designed to balance semantic fit (or functional quality) with non-functional QoS metrics, in order to optimize service composition. To allow the use of this model in the context of a large number of services as foreseen by the EC-funded project SOA4All we propose and test the use of Genetic Algorithms.
Graph-Based Ontology Construction from Heterogenous Evidences
Semantic Web Enabled Software Engineering
Functions over RDF Language Elements
Functions over RDF Language Elements
RDF data are usually accessed using one of two methods: either, graphs are rendered in forms perceivable by human users (e.g., in tabular or in graphical form), which are difficult to handle for large data sets. Alternatively, query languages like SPARQL provide means to express information needs in structured form; hence they are targeted towards developers and experts. Inspired by the concept of spreadsheet tools, where users can perform relatively complex calculations by splitting formulas and values across multiple cells, we have investigated mechanisms that allow us to access RDF graphs in a more intuitive and manageable, yet formally grounded manner. In this paper, we make three contributions towards this direction. First, we present RDFunctions, an algebra that consists of mappings between sets of RDF language elements (URIs, blank nodes, and literals) under consideration of the triples contained in a background graph. Second, we define a syntax for expressing RDFunctions, which can be edited, parsed and evaluated. Third, we discuss Tripcel, an implementation of RDFunctions using a spreadsheet metaphor. Using this tool, users can easily edit and execute function expressions and perform analysis tasks on the data stored in an RDF graph.
University of Crete
University of Crete
Renaud Delbru
Renaud Delbru
Semantically Annotating RESTful Services with SWEET
This paper presents SWEET, the first tool developed for supporting users in creating semantic RESTful services by structuring service descriptions and associating semantic annotations with the aim to support a higher level of automation when performing common tasks with RESTful services, such as their discovery and composition.
Semantically Annotating RESTful Services with SWEET
ISIR, Osaka University
ISIR, Osaka University
BBN Technologies
BBN Technologies
Li Ding
Li Ding
a7d0967c21ba8952ab0443bf822750d4c44afe2c
Chemnitz University of Technology
Chemnitz University of Technology
Context and Domain Knowledge Enhanced Entity Spotting in Informal Text
A Case Study in Integrating Multiple E-commerce Standards via Semantic Web Technology
A Case Study in Integrating Multiple E-commerce Standards via Semantic Web Technology
Internet business-to-business transactions present great challenges in merging information from different sources. In this paper we describe a project to integrate four representative commercial classification systems with the Federal Cataloging System (FCS). The FCS is used by the US Defense Logistics Agency to name, describe and classify all items under inventory control by the DoD. Our approach uses the ECCMA Open Technical Dictionary (eOTD) as a common vocabulary to accommodate all different classifications. We create a semantic bridging ontology between each classification and the eOTD to describe their logical relationships in OWL DL. The essential idea is that since each classification has formal definitions in a common vocabulary, we can use subsumption to automatically integrate them, thus mitigating the need for pairwise mappings. Furthermore our system provides an interactive interface to let users choose and browse the results and more importantly it can translate catalogs that commit to these classifications using compiled mapping results.
Gianluca Correndo
Gianluca Correndo
Towards Soundness Preserving Approximation for TBox Reasoning in OWL 2
Towards Soundness Preserving Approximation for TBox Reasoning in OWL 2
Large scale semantic web applications require efficient and robust description logic (DL) reasoning services. In this paper, we present a soundness preserving tractable approximative reasoning approach for TBox reasoning in R, a fragment of OWL2-DL supporting ALC GCIs and role chains with 2ExpTime-hard complexity. We first rewrite the ontologies into EL+ with an additional complement table maintaining the complementary relations between named concepts, and then classify the approximation. Preliminary evaluation shows that our approach can classify existing benchmarks in large scale efficiently with a high recall.
John G. Breslin
John G. Breslin
Manuel Salvadores
Manuel Salvadores
Dynamic Querying of Mass-Storage RDF Data with Rule-Based Entailment Regimes
Dynamic Querying of Mass-Storage RDF Data with Rule-Based Entailment Regimes
RDF Schema (RDFS) as a lightweight ontology language is gaining popularity and, consequently, tools for scalable RDFS inference and querying are needed. SPARQL has become recently a W3C standard for querying RDF data, but it mostly provides means for querying simple RDF graphs only, whereas querying with respect to RDFS or other entailment regimes is left outside the current specification. In this paper, we show that SPARQL faces certain unwanted ramifications when querying ontologies in conjunction with RDF datasets that comprise multiple named graphs, and we provide an extension for SPARQL that remedies these effects. Moreover, since RDFS inference has a close relationship with logic rules, we generalize our approach to select a custom ruleset for specifying inferences to be taken into account in a SPARQL query. We show that our extensions are technically feasible by providing benchmark results for RDFS querying in our prototype system GiaBATA, which uses Datalog coupled with a persistent Relational Database as a back-end for implementing SPARQL with dynamic rule-based inference. By employing different optimization techniques like magic set rewriting our system remains competitive with state-of-the-art RDFS querying systems.
Eyal Oren
Eyal Oren
c626588229d59c4e58a57c57497239468212b824
Jozef Stefan Institute, Cyc Europe
Jozef Stefan Institute, Cyc Europe
Andreas Abecker
Andreas Abecker
SRI International
SRI International
Investigating the Semantic Gap through Query Log Analysis
All About That - A URI Profiling Tool for monitoring and preserving Linked Data
All About That - A URI Profiling Tool for monitoring and preserving Linked Data
All About That (AAT) is a URI Profiling tool which allows users to monitor and preserve Linked Data in which they are interested. Its design is based upon the principle of adapting ideas from hypermedia link integrity in order to apply them to the Semantic Web. As the Linked Data Web expands it will become increasingly important to maintain links such that the data remains useful and therefore this tool is presented as a step towards providing this maintenance capability.
Jens Lehmann
Jens Lehmann
Martin Hepp
Martin Hepp
49e06491d1c02eead2d362e2300fa56d24ed5213
Han Yu Li
Han Yu Li
Yuting Zhao
Yuting Zhao
Olaf Görlitz
Olaf Görlitz
Valentina Tamma
Valentina Tamma
Raymond Franz
Raymond Franz
Goal-Directed Module Extraction for Explaining OWL DL Entailments
Module extraction methods have proved to be effective in improving the performance of some ontology reasoning tasks, including finding justifications to explain why an entailment holds in an OWL DL ontology. However, the existing module extraction methods that compute a syntactic locality-based module for the sub-concept in a subsumption entailment, though ensuring the resulting module to preserve all justifications of the entailment, may be insufficient in improving the performance of finding all justifications. This is because a syntactic locality-based module is independent of the super-concept in a subsumption entailment and always contains all concept/role assertions. In order to extract smaller modules to further optimize finding all justifications in an OWL DL ontology, we propose a goal-directed method for extracting a module that preserves all justifications of a given entailment. Experimental results on large ontologies show that a module extracted by our method is smaller than the corresponding syntactic locality-based module, making the subsequent computation of all justifications more scalable and more efficient.
Goal-Directed Module Extraction for Explaining OWL DL Entailments
Chinese Academy of Sciences
Chinese Academy of Sciences
Optimizing Web Service Composition while Enforcing Regulations
Optimizing Web Service Composition while Enforcing Regulations
To direct automated Web service composition, it is compelling to provide a template, workflow or scaffolding that dictates the ways in which services can be composed. In this paper we present an approach to Web service composition that builds on work using AI planning, and more specifically Hierarchical Task Networks (HTNs), for Web service composition. A significant advantage of our approach is that it provides much of the how-to knowledge of a choreography while enabling customization and optimization of integrated Web service selection and composition based upon the needs of the specific problem, the preferences of the customer, and the available services. Many customers must also be concerned with enforcement of regulations, perhaps in the form of corporate policies and/or government regulations. Regulations are traditionally enforced at design time by verifying that a workflow or composition adheres to regulations. Our approach supports customization, optimization and regulation enforcement all at composition construction time. To maximize efficiency, we have developed novel search heuristics together with a branch and bound search algorithm that enable the generation of high quality compositions with the performance of state-of-the-art planning systems.
Dominic DiFranzo
Dominic DiFranzo
TU Dresden
TU Dresden
Carlos Pedrinaci
Carlos Pedrinaci
Gideon Zenz
Gideon Zenz
Vrije Universiteit Amsterdam
Vrije Universiteit Amsterdam
VU University Amsterdam
VU University Amsterdam
Johan Stapel
Johan Stapel
Hendrike Peetz
Hendrike Peetz
Freddy Lécué
Freddy Lécué
bdb276156a4eda9e5e5b69f9e418980d943a40e0
Freddy Lecue
Freddy Lecue
Living Web: Making Web Diversity a true asset
Les Carr
Les Carr
Bouke Huurnink
Bouke Huurnink
A Decomposition-based Approach to Optimizing Conjunctive Query Answering in OWL DL
Scalable query answering over Description Logic (DL) based ontologies plays an important role for the success of the Semantic Web. Towards tackling the scalability problem, we propose a decomposition-based approach to optimizing existing OWL DL reasoners in evaluating conjunctive queries in OWL DL ontologies. The main idea is to decompose a given OWL DL ontology into a set of target ontologies without duplicated ABox axioms so that the evaluation of a given conjunctive query can be separately performed in every target ontology by applying existing OWL DL reasoners. This approach guarantees sound and complete results for the category of conjunctive queries that the applied OWL DL reasoner correctly evaluates. Experimental results on large benchmark ontologies and benchmark queries show that the proposed approach can significantly improve scalability and efficiency in evaluating general conjunctive queries.
A Decomposition-based Approach to Optimizing Conjunctive Query Answering in OWL DL
Dirk Kramer
Dirk Kramer
8th International Semantic Web Conference
TripleRank: Ranking Semantic Web Data By Tensor Decomposition
Boris Motik
Boris Motik
Mark Musen
Mark Musen
Mark A Musen
Mark A Musen
Zoltan Padrah
Zoltan Padrah
Lalana Kagal
Lalana Kagal
Valentina Presutti
Valentina Presutti
L3S Research Center
L3S Research Center, Germany
L3S Research Center
L3S Research Center, Germany
Korea Institute of Science and Technology Information
Korea Institute of Science and Technology Information
Dunja Mladenic
Dunja Mladenic
Kerstin Denecke
Kerstin Denecke
084d28fe6678e6db899c49fe9b9b5bce1cb25fad
Rajiv Nair
Rajiv Nair
Fabien Gandon
Fabien Gandon
Terrance Swift
Terrance Swift
STARS Semantic Tools for Screen Arts Research
STARS is an open source e-research tool that enables screen arts researchers to browse, annotate and replay moving image content in order to better understand its thematic links to those people and communities involved in all aspects of its creation. The STARS software was built using Semantic Web technologies to address the technical challenges of integrated searching, browsing and visualisation across curated core data and user-contributed annotations.
STARS Semantic Tools for Screen Arts Research
Semantically-aided business process modeling
Olivier Corby
Olivier Corby
Christian Kubczak
Christian Kubczak
4d99401dd6369acf0324b59187eb785669caad28
Synthesizing Semantic Web Service Compositions with jMosel and Golog
Mihael Mohorcic
Mihael Mohorcic
Irini Fundulaki
Irini Fundulaki
Jelena Jovanovic
Jelena Jovanovic
Thomas Krennwallner
Thomas Krennwallner
4fbf23f97eb1c36561a72379cea71095200fbd60
Martin Szomszor
Martin Szomszor
Philip Nguyen
Philip Nguyen
Martín Vigo
Martín Vigo
Simon Price
Simon Price
e8b95a47e8881222790b2044456a665cf112be43
Matt Fisher
Matt Fisher
Towards Lightweight and Robust Large Scale Emergent Knowledge Processing
Matthias Knorr
Matthias Knorr
Mikyoung Lee
Mikyoung Lee
Hiroko Kou
Hiroko Kou
Telecom ParisTech, Paris, France
Telecom ParisTech, Paris, France
Ganesh Gajre
Ganesh Gajre
Achille Fokoue
Achille Fokoue
Kemafor Anyanwu
Kemafor Anyanwu
8caa6f23c7e9f622ab0b047a4850f1b32a505ae0
Deborah McGuinness
Deborah McGuinness
Beom-Jong You
Beom-Jong You
Xiaoyuan Wang
Xiaoyuan Wang
Exploiting Partial Information in Taxonomy Construction
Qiu Ji
Qiu Ji
Institute of Computer Science (ICS), FORTH and University of Crete
Institute of Computer Science (ICS), FORTH and University of Crete
Stefan Schlobach
Stefan Schlobach
Smitashree Choudhury
Smitashree Choudhury
OntoCase - Automatic Ontology Enrichment Based on Ontology Design Patterns
OntoCase - Automatic Ontology Enrichment Based on Ontology Design Patterns
OntoCase is a framework for semi-automatic pattern-based ontology construction. In this paper we focus on the retain and reuse phases, where an initial ontology is enriched based on content ontology design patterns (Content ODPs), and especially the implementation and evaluation of these phases. Applying Content ODPs within semiautomatic ontology construction, i.e. ontology learning (OL), is a novel approach. The main contributions of this paper are the methods for pattern ranking, selection, and integration, and the subsequent evaluation showing the characteristics of ontologies constructed automatically based on ODPs. We show that it is possible to improve the results of existing OL methods by selecting and reusing Content ODPs. OntoCase is able to introduce a general top structure into the ontologies, and by exploiting background knowledge the ontology is given a richer overall structure.
Maarten de Rijke
Maarten de Rijke
Christophe Guéret
Christophe Guéret
Matthew Horridge
Matthew Horridge
29c0127871609390f3d73c72c9418cc2936ac0f8
Ahmed Serhrouchni
Ahmed Serhrouchni
Alex Stolz
Alex Stolz
Mohammad Reza Tazari
Mohammad Reza Tazari
837af293870af4d2857fe1765f0186fe0d686308
Paul Doran
Paul Doran
Dan Wolfson
Dan Wolfson
Live Social Semantics
Live Social Semantics
Social interactions are one of the key factors to the success of conferences and similar community gatherings. This paper describes a novel application that integrates data from the semantic web, online social networks, and a real-world contact sensing platform. This application was successfully deployed at ESWC09, and actively used by 139 people. Personal profiles of the participants were automatically generated using several Web 2.0 systems and semantic academic data sources, and integrated in real-time with face-to-face contact networks derived from wearable sensors. Integration of all these heterogeneous data layers made it possible to offer various services to conference attendees to enhance their social experience such as visualisation of contact data, and a site to explore and connect with other participants. This paper describes the architecture of the application, the services we provided, and the results we achieved in this deployment.
Universität Koblenz-Landau
Universität Koblenz-Landau
University of Koblenz-Landau
University of Koblenz-Landau
Feng Shi
Feng Shi
20751a132d1a149de108783c385979293bf7b332
Modeling and Query Patterns for Process Retrieval in OWL
Bart Brinkman
Bart Brinkman
Tuukka Hastrup
Tuukka Hastrup
Tetherless World Mobile Wine Agent: An Application for Semantics on Mobile Devices
The Tetherless World Mobile Wine Agent integrates semantics, geolocation, and social networking on a low-power, mobile platform to provide a unique food and wine recommender system. It provides a robust user interface that allows users to describe a wealth of information about foods and wines as OWL classes and instances and it allows users to share these descriptions with their friends via custom URIs. This demonstration will examine how the user interface simplifies generating RDF data, how location services such as GPS can simplify reasoning (reducing the ABox due to context-sensitive information), and how users of the Mobile Wine Agent can utilize social networking tools such as Facebook and Twitter to share content with others over the World Wide Web.
Tetherless World Mobile Wine Agent: An Application for Semantics on Mobile Devices
Sirish Darbha
Sirish Darbha
Ciro Cattuto
Ciro Cattuto
Knud Möller
Knud Möller
b15d1e7efb11374644555fa9734bf75a553a362c
Gem Stapleton
Gem Stapleton
2ffe006a99d1aca6904cb094c26aa1eab609d12b
Leibniz University Hannover
Leibniz University Hannover
Shirin Sohrabi
Shirin Sohrabi
fa1f8ceaa83076619cf1ba8cd4b859a22607a890
Using Naming Authority to Rank Data and Ontologies for Web Search
Using Naming Authority to Rank Data and Ontologies for Web Search
The focus of web search is moving away from returning relevant documents towards returning structured data as results to user queries. A vital part in the architecture of search engines are link-based ranking algorithms, which however are targeted towards hypertext documents. Existing ranking algorithms for structured data, on the other hand, require manual input of a domain expert and are thus not applicable in cases where data integrated from a large number of sources exhibits enormous variance in vocabularies used. In such environments, the authority of data sources is an important signal that the ranking algorithm has to take into account. This paper presents algorithms for prioritising data returned by queries over web datasets expressed in RDF. We introduce the notion of naming authority which provides a correspondence between identifiers and the sources which can speak authoritatively for these identifiers. Our algorithm uses the original PageRank method to assign authority values to data sources based on a naming authority graph, and then propagates the authority values to identifiers referenced in the sources. We conduct performance and quality evaluations of the method on a large web dataset. Our method is schema-independent, requires no manual input, and has applications in search, query processing, reasoning, and user interfaces over integrated datasets.
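The ranking method above applies the original PageRank computation to a naming authority graph. A hedged sketch of that step on a toy graph, where an edge A -> B means source A uses identifiers that source B speaks authoritatively for; the source names, damping factor, and dangling-node handling are illustrative assumptions.

```python
# Invented naming authority graph: source -> sources it defers to.
GRAPH = {
    "src1": ["dbpedia"],
    "src2": ["dbpedia", "foaf"],
    "dbpedia": ["foaf"],
    "foaf": [],
}

def pagerank(graph, damping=0.85, iterations=50):
    """Plain power-iteration PageRank; rank mass sums to 1 throughout."""
    n = len(graph)
    rank = {node: 1.0 / n for node in graph}
    for _ in range(iterations):
        new = {node: (1 - damping) / n for node in graph}
        for node, out in graph.items():
            if out:
                share = damping * rank[node] / len(out)
                for target in out:
                    new[target] += share
            else:  # dangling source: redistribute its rank uniformly
                for target in graph:
                    new[target] += damping * rank[node] / n
        rank = new
    return rank

ranks = pagerank(GRAPH)
print(sorted(ranks.items(), key=lambda kv: kv[1], reverse=True))
```

In the paper's method these source-level authority values are then propagated to the individual identifiers each source mentions; this sketch only covers the source ranking.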
OntoCase - Automatic Ontology Enrichment Based on Ontology Design Patterns
Finding Semantic Web Ontology Terms from Words
Finding Semantic Web Ontology Terms from Words
The Semantic Web was designed to unambiguously define and use ontologies to encode data and knowledge on the Web. Many people find it difficult, however, to write complex RDF statements and queries because it requires familiarity with the appropriate ontologies and the terms they define. We describe a framework that eases the experiences in authoring and querying RDF data, in which we focus on automatically finding a set of appropriate Semantic Web ontology terms from a set of words used as the labels of nodes and edges in an incoming semantic graph.
Daniel Miranker
Daniel Miranker
Johannes Kepler University Linz
Johannes Kepler University Linz
Using Naming Authority to Rank Data and Ontologies for Web Search
SM Hazzaz Imtiaz
SM Hazzaz Imtiaz
Zhixiong Zhang
Zhixiong Zhang
Ian Millard
Ian Millard
Wouter Van den Broeck
Wouter Van den Broeck
Semantics: Reasoning and Provenance
Alessandra Martello
Alessandra Martello
Ontology Patterns
Lourens van der Meij
Lourens van der Meij
Gihyun Gong
Gihyun Gong
University of Texas at Austin
University of Texas at Austin
Orange Labs
Orange Labs
Emanuele Della Valle
Emanuele Della Valle
Martin Knechtel
Martin Knechtel
Uncertainty Reasoning for the Semantic Web
Andreas Langegger
Andreas Langegger
Landong Zuo
Landong Zuo
294d7e9e8ec1c2781bd0a11abf20d2b3e00a13f6
Li Ma
Li Ma
9cd13ac912bb0af12b825511b978ce01314a5eb5
Shenghui Wang
Shenghui Wang
Harvard Medical School, Department of Neurology, Boston, MA, USA
Harvard Medical School, Department of Neurology, Boston, MA, USA
Margaret-Anne Storey
Margaret-Anne Storey
Steffen Staab
Steffen Staab
Hasso Plattner Institut, Potsdam
Hasso Plattner Institut, Potsdam
David Martin
David Martin
Semantic RPC via Queries
Semantic RPC via Queries
A vision of the Semantic Web is to facilitate global software interoperability. Many approaches and specifications are available that work towards realizing this vision: service-oriented architectures (SOA) provide a good level of abstraction for interoperability; Web Services provide programmatic interfaces for application-to-application communication in SOA; and there are ontologies that can be used for machine-readable description of service semantics. What is still missing is a standard for constructing semantically formulated service requests that rely solely on shared domain ontologies, without depending on programmatic or even semantically described interfaces. Semantic RPC would then include the whole process from issuing such a request, matchmaking with semantic profiles of available and accessible services, deriving input parameters for the matched service(s), calling the service(s), and getting the results, to mapping the results back onto an appropriate response to the original request. The standard must avoid realization-specific assumptions so that frameworks supporting semantic RPC can be built to bridge the gap between semantically formulated service requests and matched programmatic interfaces. This poster introduces a candidate solution to this problem by outlining a query language for semantic service utilization based on an extension of the OWL-S ontology for service description.
MITRE
MITRE
A Conflict-based Operator for Mapping Revision -- Theory and Implementation
A Conflict-based Operator for Mapping Revision -- Theory and Implementation
Ontology matching is one of the key research topics in the field of the Semantic Web. There are many matching systems that generate mappings between different ontologies either automatically or semi-automatically. However, the mappings generated by these systems may be inconsistent with the ontologies. Several approaches have been proposed to deal with the inconsistencies between mappings and ontologies. This problem is often called a mapping revision problem, as the ontologies are assumed to be correct, whereas the mappings are repaired when resolving the inconsistencies. In this paper, we first propose a conflict-based mapping revision operator and show that it can be characterized by two logical postulates adapted from some existing postulates for belief base revision. We then provide an algorithm for iterative mapping revision by using an ontology revision operator and show that this algorithm defines a conflict-based mapping revision operator. Three concrete ontology revision operators are given to instantiate the iterative algorithm, which result in three different mapping revision algorithms. We implement these algorithms and provide some preliminary but interesting evaluation results.
Tom Heath
Tom Heath
Laura Dragan
Laura Dragan
Andrea Giovanni Nuzzolese
Andrea Giovanni Nuzzolese
ISWC2009 Research Track
Oshani Seneviratne
Oshani Seneviratne
a3f8fd9e643c70000e200a3e9432254212ae5657
Feng Cao
Feng Cao
Nigam Shah
Nigam Shah
Nigam H. Shah
Nigam H. Shah
Tudor Groza
Tudor Groza
75691f9b8834cf1e1552893d01c7bc0dca6136ee
Willem Robert van Hage
Willem Robert van Hage
6b25dea02fbc6b6663e71b15efd548f0b17f8e0d
Open Ontology Repository
The Open Ontology Repository is an open source effort to develop infrastructure for ontologies that is federated, robust and secure. This article describes the purpose, requirements and goals of this initiative.
Open Ontology Repository
Terra Cognita
Stefan Decker
Stefan Decker
Hugo Zaragoza
Hugo Zaragoza
SAP Research CEC Dresden
SAP Research CEC Dresden
Benjamin Coe
Benjamin Coe
Nenad Stojanovic
Nenad Stojanovic
We present SaHaRa, a system that helps discover and analyze relationships between entities and topics in large collections of news articles. We augment entity-related search by including semantically related Linked Open Data.
SaHaRa: Discovering Entity-Topic Associations in Online News
SaHaRa: Discovering Entity-Topic Associations in Online News
Knowledge based conference video-recordings visualization
Knowledge based conference video-recordings visualization
The advent of information retrieval technologies driven by user requests calls for an effort to conceive and develop semantic-based applications. In recent years the Semantic Web gave rise to a new generation of search engines that rely on the semantics of documents expressed through metadata. In this paper we present a knowledge-based approach to visualizing and navigating through conference video-recordings. This approach is based on a conference ontology that models the information conveyed within a conference life cycle.
Ontology Matching
Mayo Clinic, Division of Biomedical Statistics and Informatics
Mayo Clinic, Division of Biomedical Statistics and Informatics
Enrichment and Ranking of the YouTube Tag Space and Integration with the Linked Data Cloud
The spread of personal digital cameras with video functionality and video-enabled camera phones has increased the amount of user-generated video on the Web. People are spending more and more time viewing online videos as a major source of entertainment and infotainment. Social websites allow users to assign shared free-form tags to user-generated multimedia resources, thus generating annotations for objects with a minimum amount of effort. Tagging allows communities to organize their multimedia items into browseable sets, but these tags may be poorly chosen and related tags may be omitted. Current techniques to retrieve, integrate and present this media to users are deficient and need improvement. In this paper we describe a framework for semantic enrichment, ranking and integration of web video tags using Semantic Web technologies. Semantic enrichment of folksonomies can bridge the gap between the uncontrolled and flat structures typically found in user-generated content and the structures provided by the Semantic Web. The enhancement of tag spaces with semantics is accomplished through two major tasks: (1) a tag space expansion and ranking step; and (2) concept matching and integration with the Linked Data cloud. We have explored social, temporal and spatial contexts to enrich and extend the existing tag space. The resulting semantic tag space is modelled via a local graph based on co-occurrence distances for ranking. A ranked tag list is mapped and integrated with the Linked Data cloud through the DBpedia resource repository. Multi-dimensional context filtering for tag expansion makes tag ranking easier and yields less ambiguous tag-to-concept matching.
Enrichment and Ranking of the YouTube Tag Space and Integration with the Linked Data Cloud
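The abstract above mentions modelling the tag space as a local graph based on co-occurrence for ranking. A minimal sketch of that idea, using a made-up set of tagged videos and a deliberately simple scoring rule (total co-occurrence weight), could look like this; the paper's actual distance-based model is more elaborate.

```python
from collections import Counter
from itertools import combinations

# Hypothetical data: each inner list is the tag set of one video.
videos = [
    ["dublin", "ireland", "travel"],
    ["dublin", "guinness", "travel"],
    ["ireland", "travel", "music"],
]

# Build co-occurrence edge weights: how often each tag pair
# appears on the same video.
cooc = Counter()
for tags in videos:
    for a, b in combinations(sorted(set(tags)), 2):
        cooc[(a, b)] += 1

# Rank each tag by the total weight of its co-occurrence edges.
score = Counter()
for (a, b), w in cooc.items():
    score[a] += w
    score[b] += w

ranked = [t for t, _ in score.most_common()]
```

Tags that co-occur with many others rise to the top of the ranked list, which is the intuition behind using a co-occurrence graph before matching tags to Linked Data concepts.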
Sangkeun Lee
Sangkeun Lee
1349774e69a38dad576eeb9c1a876584db12e2bf
Florian Probst
Florian Probst
Massachusetts General Hospital, Department of Neurology, Boston, MA, USA
Massachusetts General Hospital, Department of Neurology, Boston, MA, USA
Stony Brook University
Stony Brook University
Like web services, semantically-operated services can be assembled to construct a new composite service. For this, we designed a semantic broker that searches for semantic services matching given conditions, assembles them to dynamically generate pipelines of semantic services, and executes the pipelines. By executing the resulting pipelines, the user can select the one he or she really intended. In this way, our system can help users who want to design new semantically-operated services by mashing up existing semantically-operated services.
OntoPipeliner: A Semantic Broker-based Manager for Pipelining Semantically-operated Services
OntoPipeliner: A Semantic Broker-based Manager for Pipelining Semantically-operated Services
One of the main competencies of software engineers, solving software problems, is most effectively acquired through active examination of learning resources and work on real-world examples in small development teams. This clearly indicates a need for the integration of several existing learning tools and systems into a common collaborative learning environment, as well as for advanced educational services that provide students with just-in-time advice about learning resources and possible collaboration partners. In this paper, we present how we developed and applied a common ontological foundation for the integration of different existing learning tools and systems in a common learning environment called DEPTHS (Design Patterns Teaching Help System). In addition, we present a set of educational services that leverages the semantically rich representation of learning resources and student interaction data to recommend resources relevant to a student's current learning context.
Semantic Web Technologies for the Integration of Learning Tools and Context-aware Educational Services
Semantic Web Technologies for the Integration of Learning Tools and Context-aware Educational Services
Maarten van Someren
Maarten van Someren
Vassilis Christophides
Vassilis Christophides
Semantic Web Service Composition in Social Environments
Semantic Web Service Composition in Social Environments
David Manzolillo
David Manzolillo
b5ee378b72c422e50996708613f3d33cf5039f71
Craig A. Knoblock
Craig A. Knoblock
RDA and the Open Metadata Registry
RDA and the Open Metadata Registry
As more and more of the world's databases are opened to the Semantic Web as linked data, there is a growing awareness of the need for upper-level ontologies and RDF vocabularies to support the dissemination of this data. For more than 150 years libraries have been developing standards for describing resources contained in the world's libraries. This year, for the first time in its long history, the library community is making that experience and knowledge freely available as a coordinated set of controlled vocabularies and upper-level ontologies. Resource Description and Access (RDA) is the international library community's new standard for resource description. A component of this standard -- the RDA Vocabularies -- will finally allow libraries to make the vast silos of library and museum metadata publicly available as semantically rich linked data, and provide the semantic web and linked data communities access to more than a century of library experience in describing resources. The Open Metadata Registry is hosting these vocabularies. The Registry is an Open Source, non-commercial project specifically designed to provide individuals, communities, and organizations an easy-to-use platform supporting the development and dissemination of multi-lingual controlled vocabularies and upper-level and domain-specific ontologies. This demo, poster and related handouts will introduce Resource Description and Access (RDA) and the Open Metadata Registry vocabulary development platform to the Semantic Web Community.
The AgreementMaker system for ontology matching includes an extensible architecture, which facilitates the integration and performance tuning of a variety of matching methods, an evaluation mechanism, which can make use of a reference matching or rely solely on quality measures, and a multi-purpose user interface, which drives both the matching methods and the evaluation strategies. Our demo focuses on the tight integration of matching methods and evaluation strategies, a unique feature of our system.
Integrated Ontology Matching and Evaluation
Integrated Ontology Matching and Evaluation
Antoine Isaac
Antoine Isaac
b3f1e62e330cd51c8d2bc7d275d11ed9f4345caa
Stefan Zander
Stefan Zander
The QL profile of OWL 2 has been designed so that it is possible to use database technology for query answering via query rewriting. We present a comparison of our resolution-based rewriting algorithm with the standard algorithm proposed by Calvanese et al., implementing both and conducting an empirical evaluation using ontologies and queries derived from realistic applications. The results indicate that our algorithm produces significantly smaller rewritings in most cases, which could be important for practicality in realistic applications.
Efficient Query Answering for OWL 2
Efficient Query Answering for OWL 2
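The abstract above is concerned with the size of query rewritings in OWL 2 QL. A toy sketch can show where that size comes from: an atomic query over a class is expanded into a union over everything subsumed by it, so rewriting size grows with the class hierarchy. The TBox, class names, and the naive expansion below are illustrative assumptions, not either of the compared algorithms.

```python
# Hypothetical TBox of subclass axioms: child -> parent.
subclass_of = {
    "GradStudent": "Student",
    "Undergrad": "Student",
    "Student": "Person",
}

def subclasses(cls):
    """All classes entailed to be subsumed by cls (including cls itself)."""
    result = {cls}
    changed = True
    while changed:
        changed = False
        for child, parent in subclass_of.items():
            if parent in result and child not in result:
                result.add(child)
                changed = True
    return result

def rewrite(query_class):
    """Naively rewrite 'SELECT ?x WHERE { ?x a <C> }' into a union
    of one atom per subsumed class."""
    return [f"?x a {c}" for c in sorted(subclasses(query_class))]

union = rewrite("Person")  # one disjunct per class subsumed by Person
```

Even this three-axiom hierarchy turns a single query atom into four disjuncts, which is why producing smaller rewritings matters when queries are handed to a database.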
Shonali Krishnaswamy
Shonali Krishnaswamy
Daniele Dell'Aglio
Daniele Dell'Aglio
Management: Database Technologies
Ontologies play a central role in the formal representation of knowledge on the Semantic Web. A major challenge in collaborative ontology construction is handling inconsistencies caused by changes to the ontology. In this paper, we present our CoDR system, which helps to diagnose and repair collaboratively constructed ontologies. CoDR integrates RaDON, an ontology diagnosis and repair tool, and Cicero, which provides discussion functionality for ontology developers. CoDR is realized as a plugin for the NeOn Toolkit. It makes it possible to use discussions held in Cicero as context information while repairing an ontology with RaDON, and, conversely, to use the diagnoses from RaDON during the discussions in Cicero.
CoDR: A Contextual Framework for Diagnosis and Repair
CoDR: A Contextual Framework for Diagnosis and Repair
Chen Wang
Chen Wang
Gerd Groener
Gerd Groener
0b39da82cd110b143beb5837931d01f0bdf4c60c
Vienna University of Technology
Vienna University of Technology
Christian Bizer
Chris Bizer
Christian Bizer
Chris Bizer
Discovering Semantics
The exponential growth of the World Wide Web in the last decade brought an explosion in the information space, which also has important consequences for scientific research. Finding relevant work in a particular field and exploring the links between publications is thus quite a cumbersome task. Similarly, on the desktop, managing the publications acquired over time can represent a real challenge. Extracting semantic metadata, exploring the linked data cloud and using the semantic desktop for managing personal information represent, in part, solutions for different aspects of the above-mentioned issues. In this poster/demo, we show an innovative approach for bridging these three directions with the overall goal of alleviating the information overload problem burdening early-stage researchers.
sClippy: Connecting Personal Information and Linked Open Data
sClippy: Connecting Personal Information and Linked Open Data
University of Southampton
University of Southampton
School of Electronic and Computer Science, University of Southampton, U.K.
ECS, University of Southampton
School of Electronic and Computer Science, University of Southampton, U.K.
ECS, University of Southampton
Jobe Microsystems, University of Western Ontario
Jobe Microsystems, University of Western Ontario
Sheila Kinsella
Sheila Kinsella
FBK-irst
FBK-irst
Yuan Ren
Yuan Ren
bccb6cdc2f58d812d5d71b6a39fa756681dd369f
John Howse
John Howse
Task Oriented Evaluation of Module Extraction Techniques
Ontology modularization techniques identify coherent and often reusable regions within an ontology. The ability to identify such modules, and thus potentially reduce the size or complexity of an ontology for a given task or set of concepts, is increasingly important in the Semantic Web as domain ontologies increase in size, complexity and expressivity. To date, many techniques have been developed, but evaluation of their results has been sketchy and somewhat ad hoc. Theoretical properties of modularization algorithms have only been studied in a small number of cases. This paper presents an empirical analysis of a number of modularization techniques, and of the modules they identify over a number of diverse ontologies, by utilizing objective, task-oriented measures to evaluate the fitness of the modules for a number of statistical classification problems.
Task Oriented Evaluation of Module Extraction Techniques
Nick Kanellos
Nick Kanellos
In this work, we describe our approach to dealing with tag ambiguity in tagging systems and to enabling sense-aware, or semantic, search. Sense-aware search is realized by means of a Sense Repository, which returns a list of potential senses for given terms. This list is then presented to the user of the cross-folksonomy search engine MyTag so that the user can explicitly select the sense to search for. The search results are then ranked according to this sense so that relevant resources appear higher in the result list.
Sense Aware Searching and Exploration with MyTag
Sense Aware Searching and Exploration with MyTag
Hai Feng Liu
Hai Feng Liu
AIFB, University Karlsruhe; School of Computer Science and Engineering, Southeast University
AIFB, University Karlsruhe; School of Computer Science and Engineering, Southeast University
AIFB, University Karlsruhe
AIFB, University Karlsruhe
Mike Jones
Mike Jones