ESWC 2016 Workshops and Tutorials

Tentative schedule for ESWC16 Workshops and Tutorials* (last updated: 2015-12-22):


Time slots: May 29th morning (09:00-12:30), May 29th afternoon (14:00-17:30), May 30th morning (09:00-12:30), May 30th afternoon (14:00-17:30).

Scheduled events: DOREMUS, RDF Benchmarks, RDF validation, LOD-Lab, Relation Extraction, LDF & Dev Hackshop, Data Quality and Scalability, User Model Enrichment, Instance Matching Benchmarks, RichMedSem, SEMPER, Know@LOD and CoDeS, LDQ, SALAD, PROFILES, LiME, WHiSE, MEPDaW, SumPre, EMSASW, SW4SH.

* Although we will do our best to keep to this schedule, please note that this is currently a draft and changes may be applied according to organizational needs.


Tutorial: A Tutorial on Instance Matching Benchmarks  
Abstract:
With the continuously increasing number of datasets published on the Web of Data and forming part of the Linked Open Data Cloud, it becomes more and more essential to identify resources that correspond to the same real-world object, in order to interlink web resources and set the basis for large-scale data integration. This requirement is apparent in a multitude of domains, ranging from science (marine research, biology, astronomy, pharmacology) to semantic publishing and the cultural domain. In this context, instance matching (also referred to as record linkage, duplicate detection, entity resolution, and object identification in the context of databases) is of crucial importance. It is therefore essential to develop, along with instance and entity matching systems, benchmarks that determine the weak and strong points of those systems, as well as their overall quality, in order to support users in deciding which system to use for their needs. Hence, well-defined, good-quality benchmarks are important for comparing the performance of the developed instance matching systems. In this tutorial we will present the instance matching benchmarks that have been developed for Semantic Web data. A benchmark is, generally speaking, a set of tests against which the performance of a system is measured. Benchmarks are used not only to inform users of the strengths and weaknesses of systems, but also to encourage technology vendors to address the drawbacks of their systems and to improve their performance and functionality.
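For a rough illustration of how such a benchmark scores a system, the Python sketch below compares the matches produced by a hypothetical system against a reference alignment (gold standard) using the usual precision, recall and F-measure; the URIs and numbers are invented and are not taken from any of the benchmarks presented in the tutorial.

    # Toy illustration (not part of the tutorial material): scoring an instance
    # matching system against a reference alignment with precision/recall/F1.

    # Reference alignment (gold standard): pairs of URIs known to denote the
    # same real-world object. The URIs below are invented.
    reference = {
        ("http://ex.org/a/1", "http://ex.org/b/7"),
        ("http://ex.org/a/2", "http://ex.org/b/9"),
        ("http://ex.org/a/3", "http://ex.org/b/4"),
    }

    # Matches produced by a hypothetical instance matching system.
    system_output = {
        ("http://ex.org/a/1", "http://ex.org/b/7"),   # correct
        ("http://ex.org/a/2", "http://ex.org/b/5"),   # wrong
    }

    true_positives = len(system_output & reference)
    precision = true_positives / len(system_output)     # 1/2 = 0.50
    recall = true_positives / len(reference)            # 1/3 = 0.33
    f1 = 2 * precision * recall / (precision + recall)  # = 0.40

    print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")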
 
 
Website:
 

 

Tutorial: Assessing the performance of RDF Engines: Discussing RDF Benchmarks
Abstract:
In this tutorial, we focus our attention on benchmarking and we limit our scope to RDF, which is the latest data exchange format to gain traction for representing information in the Semantic Web. Our interest in RDF is well-timed for two reasons:

  • there is a proliferation of RDF systems, and identifying the strong and weak points of these systems is important to support users in deciding which system to use for their needs;
  • surprisingly, there is a similar proliferation of RDF benchmarks, a development that adds to the confusion, since it is not clear which benchmark(s) one should use (or trust) to evaluate existing, or new, systems.

Benchmarks can be used to inform users of the strengths and weaknesses of competing software products, but more importantly, they encourage the advancement of technology by providing both academia and industry with clear targets for performance and functionality. Given the multitude of usage scenarios of RDF systems, one can ask the following questions:

  • How can one come up with the right benchmark that accurately captures all these use cases?
  • How can a benchmark capture the fact that RDF data are used to represent the whole spectrum of data, from structured data (relational data converted to RDF), through semi-structured data (XML converted to RDF), to natively unstructured graph data?
  • How can a benchmark capture the different data and query patterns and provide a consistent picture of system behavior across different application settings?
  • When one benchmark does not suffice and multiple ones are needed, how can one pick the right set of benchmarks to try?

These are particularly hard questions whose answers require both an in-depth understanding of the domains where RDF is used, and an in-depth understanding of which benchmarks are appropriate for which domains. In this tutorial, we provide some guidance in this respect by discussing state-of-the-art RDF benchmarks and, if time permits, graph benchmarks.

 
 
Website:
 

 

Tutorial: RDF and Linked Data validation
Abstract:
RDF promises a distributed database of repurposable, machine-readable data. Although the benefits of RDF for data representation and integration are indisputable, it has not been embraced by everyday programmers and software architects who care about safely creating and accessing well-structured data. Semantic web projects still lack some common tools and methodologies that are available in more conventional settings to describe and validate data. In particular, relational databases and XML have popular technologies for defining data schemas and validating data. These currently have no analog in RDF.

Shape Expressions (ShEx) has been designed as an intuitive and human-friendly high-level language for RDF validation.

In 2014, the W3C chartered a working group called RDF Data Shapes to produce a language for defining structural constraints on RDF graphs. The proposed technology is called SHACL (Shapes Constraint Language), and a first public working draft was published in October 2015.

In this tutorial we will present both ShEx and SHACL using examples and RDF data modelling exercises.

Like the popular "SPARQL By Example" tutorial, this tutorial includes step-by-step instructions with examples followed by exercises. Participants can download validation tools to use locally or use web-based interfaces such as RDFShape or the W3C ShEx Workbench.
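For a flavour of such an exercise, here is a minimal validation sketch in Python. It assumes the third-party rdflib and pySHACL libraries, which are not part of the tutorial material, and an invented example namespace; the web-based interfaces mentioned above achieve the same without a local installation.

    # Minimal SHACL validation sketch (hypothetical example, not tutorial material).
    # Requires the third-party libraries rdflib and pyshacl: pip install rdflib pyshacl
    from rdflib import Graph
    from pyshacl import validate

    SHAPES_TTL = """
    @prefix sh:  <http://www.w3.org/ns/shacl#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
    @prefix ex:  <http://example.org/> .

    ex:PersonShape a sh:NodeShape ;
        sh:targetClass ex:Person ;
        sh:property [
            sh:path ex:name ;
            sh:datatype xsd:string ;
            sh:minCount 1 ;          # every ex:Person needs at least one ex:name
        ] .
    """

    DATA_TTL = """
    @prefix ex: <http://example.org/> .

    ex:alice a ex:Person ; ex:name "Alice" .
    ex:bob   a ex:Person .              # violates the shape: no ex:name
    """

    shapes = Graph().parse(data=SHAPES_TTL, format="turtle")
    data = Graph().parse(data=DATA_TTL, format="turtle")

    # validate() returns (conforms, results_graph, results_text)
    conforms, _, report = validate(data, shacl_graph=shapes)
    print("Conforms:", conforms)   # expected: False, because ex:bob has no name
    print(report)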

 
 
Website:

 

 

Tutorial: Tutorial on Linked Data Fragments 

The LDF tutorial and the ESWC Developers Workshop have been merged.

 

 

Abstract:
The Developers Workshop and the LDF tutorial have joined forces for this merged event: the Developers Hackshop! We present a full day mixing talks on development subjects, tutorials, and an actual hackathon. At the end of the day, we plan to see some real code. We still welcome subjects for our hackathon: submit your project now!
 
 
Website:
 

 

 

Tutorial: Join the LOD Lab! Scale your Linked Data evaluations to the Web
Abstract:
This tutorial will focus on obtaining hands-on experience with LOD Lab: a new evaluation paradigm that makes algorithmic evaluation against hundreds of thousands of datasets the new norm. The LOD Lab approach builds on the award-winning LOD Laundromat architecture, HDT technology and the LOTUS text index. The intended audience for this tutorial includes all Semantic Web practitioners who need to run evaluations on Linked Open Data or who would otherwise benefit from being able to easily process large volumes of Linked Data.
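As a rough sketch of what processing LOD Laundromat output looks like, the Python example below reads triples from a local HDT file. It assumes the third-party pyHDT library and a hypothetical file name; the tutorial itself relies on the LOD Laundromat, HDT and LOTUS tooling rather than this particular library.

    # Sketch: reading triples from an HDT file with the third-party pyHDT
    # library (pip install hdt). The file name is hypothetical; LOD Laundromat
    # serves cleaned datasets in HDT form that could be used in its place.
    from hdt import HDTDocument

    document = HDTDocument("example-dataset.hdt")  # hypothetical local file

    print("Total triples:", document.total_triples)

    # Pattern search: empty strings act as wildcards for subject/predicate/object.
    triples, cardinality = document.search_triples("", "", "")
    print("Matching triples:", cardinality)

    for subject, predicate, obj in triples:
        print(subject, predicate, obj)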
 
 
Website:
 

 

 

Tutorial: From linguistic predicate-arguments to Linked Data and ontologies: Extracting n-ary relations
Abstract:
In this tutorial, we will give a comprehensive overview and hands-on training on the use of linguistic predicate-arguments and lexical resources for extracting n-ary relations from text as a means to populate the Semantic Web. More specifically, in terms of theoretical background the tutorial will cover:

  • representative predicate-argument extraction paradigms, namely Combinatory Categorial Grammar (CCG) and Dependency Grammar (DG), elaborating on their implications for the expressiveness and completeness of the subsequently generated Linked Data;
  • linguistic resources, with the main focus on FrameNet, as a backbone for delineating meaning in the extracted predicates and their participating arguments;
  • a presentation of state-of-the-art frame detection systems, in order to illustrate and foster understanding of the strengths and weaknesses incurred by the different predicate-argument paradigms;
  • vocabularies and models for generating Linked Data and ontologies.
 
 
Website:
 

 
Tutorial: Improving Quality and Scalability in Semantic Data Management
Abstract:
This tutorial offers a walk through several techniques in the field of Semantic Data Management. These techniques span coping with semantic data provenance and ontology learning; dealing with dynamics and evolution; improving the quality of semantic data and Linked Data in decentralized settings, at scale, while allowing trust and autonomy and guaranteeing consistency; improving quality by applying multi-type and large-scale entity resolution, also at scale; and finally enabling feasible assertion of new knowledge by applying large-scale parallel reasoning. Datasets developed in the context of the EU IRSES project SemData will be used as running examples throughout the tutorial.

The goal of this tutorial is to present and discuss several complementary techniques, developed in the frame of the SemData project, which are important for solving the challenges of improving the quality and scalability of Semantic Data Management, while taking into account the distributed nature, dynamics and evolution of the datasets. In particular, the objectives are:

  • Refined temporal representation: to present the developments regarding the temporal features required to cope with dataset dynamics and evolution.
  • Ontology learning and knowledge extraction: to demonstrate the methodology for robustly extracting a consensual set of community requirements from the relevant professional document corpus, refining the ontology, and evaluating the quality of this refinement.
  • Distribution, autonomy, consistency and trust: to present the approach to implementing Read/Write Linked Open Data that copes with participants' autonomy and trust at scale.
  • Entity resolution: to discuss how traditional entity resolution workflows have to be revisited in order to cope with the new challenges stemming from the Web's openness and heterogeneity, data variety, complexity, and scale.
  • Large-scale reasoning: to give an overview of the variety of existing platforms and techniques that allow parallel and scalable data processing, enabling large-scale reasoning based on rule and data partitioning over various logics.
 
 
Website:
 

 

 

Tutorial: User Model Enrichment using the Social and Semantic Web 
Abstract:
User Modeling (UM) is a cross-disciplinary research topic that can be studied from different perspectives and disciplines, including human-computer interaction, artificial intelligence, psychology and design.

This tutorial will focus on the role of user data, knowledge bases and reasoning techniques in User Modeling. UM methods aim to create digital representations of users and then adapt the interface or the content based on these models.

The tutorial will first provide an overview of research in the area, presenting the evolution of User Modeling methods over the years and focusing on the role of data and methods from the social and semantic Web in the creation and enrichment of user models. Examples from the most relevant state-of-the-art work will be presented.

Then, we will describe methods and techniques to be used in the User Modeling process to obtain user data, to enrich user data with person-related information as well as data from the Linked Open Data cloud, and to reason about these models. We will give a brief overview of methods for representing user models, how these models are used for providing recommendations, and how they are evaluated.

Finally, a practical exercise on user model enrichment is presented.

 
 
Website:
 

 

 

Tutorial: Describe Musical Works and Events for the LOD
Abstract:
Music is everywhere. Files of recorded music are spread all over the web. Yet, despite the fact that this knowledge is described, sometimes in great detail, in the information systems of many cultural and media institutions around the world, nothing is harder than finding the history of a musical piece, its composer, its cultural origins, its lyricists and influences, its covers and interpretations, etc.

DOREMUS is a research project that aims to develop tools and methods to describe, publish, interconnect and contextualize music catalogues on the web using semantic web technologies. Its primary objective is to provide common knowledge models and shared multilingual controlled vocabularies. The data modeling Working Group relies on the cataloguing expertise of three major cultural institutions: Radio France, the BnF (French National Library), and the Philharmonie de Paris.

FRBRoo is used as a starting point, for its flexibility and its separation of concerns between a (musical) Work and an Event (interpretation). We have extended the model with classes and properties specific to musical data, and we have published a set of shared multilingual vocabularies. The result shall enable fine-grained descriptions of both traditional and classical music works, as well as of the numerous concerts that are regularly held.

The aim of this tutorial is to provide in-depth explanations of those models and controlled vocabularies and to show different applications consuming this data, such as exploratory search and music recommendation applications.

 
 
Website:
 

 
Workshop: PROFILES'16: 3rd International Workshop on Dataset PROFIling and fEderated Search for Linked Data
Abstract:
While the Web of Data, and in particular Linked Data, has seen tremendous growth over the past years, take-up, usage and reuse of data is still limited and is often focused on well-known reference datasets. As the Linked Open Data (LOD) Cloud includes data from a variety of domains spread across hundreds of datasets containing billions of entities and facts, and is constantly evolving, manual assessment of dataset features is not feasible or sustainable, leading to brief and often outdated dataset metadata. The PROFILES'16 workshop is a continuation of the workshop series successfully started as PROFILES'14 and PROFILES'15 at ESWC 2014 and ESWC 2015. The workshop aims to gather innovative query and search approaches for large-scale, distributed and heterogeneous linked datasets, in line with dedicated approaches to analyse, describe and discover endpoints, as an inherent task of query distribution and dataset recommendation.
 
 
Website:
 

 

 

Workshop: Workshop on Emotions, Modality, Sentiment Analysis and the Semantic Web
Abstract:
As the Web rapidly evolves, people are becoming increasingly enthusiastic about interacting, sharing, and collaborating through social networks, online communities, blogs, wikis, and the like. In recent years, this collective intelligence has spread to many different areas, with particular focus on fields related to everyday life such as commerce, tourism, education, and health, causing the size of the social Web to expand exponentially.

Identifying the emotions (e.g. sentiment polarity, sadness, happiness, anger, irony, sarcasm, etc.) and the modality (e.g. doubt, certainty, obligation, liability, desire, etc.) expressed in this continuously growing content is critical to enable the correct interpretation of the opinions expressed or reported about social events, political movements, company strategies, marketing campaigns, product preferences, etc.

This has raised growing interest both within the scientific community, by providing it with new research challenges, and in the business world, as applications such as marketing and financial prediction would gain remarkable benefits.

This workshop is a follow-up of the ESWC 2014 workshop on "Semantic Web and Sentiment Analysis".

Based on the lessons learnt from the first edition, this year the scope of the workshop is somewhat broader (although still focused on a very specific domain), and accepted submissions will include abstracts and position papers in addition to full papers. The workshop's main focus will be discussion rather than presentations, which are seen as seeds for discussion topics, and an expected result is a joint manifesto and a research roadmap that will provide the Semantic Web community with inspiring research challenges.

 
 
Website:
 
 
 
 

 
Workshop: 5th Workshop on Knowledge Discovery and Data Mining Meets Linked Open Data (Know@LOD)

"Know@LOD" and "CoDeS" have been joined.

 

 

Abstract:
This workshop joins two successful series of past events. It follows the first four editions of Know@LOD at ESWC 2012, 2013, 2014, and 2015, each of which was attended by 25 or more participants, with the last two editions being among the most frequently visited ESWC workshops overall, as well as the Data Mining on Linked Data (DMoLD) workshop, which was held at ECML/PKDD 2013 with around 40 participants. Besides a track for research papers, the workshop will host the fourth Linked Data Mining Challenge (the first having been at DMoLD, the second and third at Know@LOD). Drawing on the experience of three such challenges, we will prepare an interesting task and dataset and announce it early to allow the development of high-quality solutions. We will hand out prizes for both the best challenge entry and the best regular workshop paper.

We hope to continue these series in the context of ESWC 2016. Given the experience of the past years, Know@LOD 2016 is planned as a full-day workshop. The proceedings of the workshop will be published with CEUR-WS.

 
 
Website:
 

 

 

Workshop: International Workshop on Summarizing and Presenting Entities and Ontologies
Abstract:
The Open Data and Semantic Web efforts have been promoting and facilitating the publication and integration of data from diverse sources, giving rise to a large and increasing volume of machine-readable data available on the Web. Even though such raw data and its ontological schema enable interoperability among Web applications, problems arise when exposing them to human users, namely how to present such large-scale structured data in a user-friendly manner. To meet this challenge, we invite research contributions on all aspects of ranking, summarization, visualization, and exploration of entities, knowledge graphs, ontologies, and open data, with a particular focus on their summarization and presentation. We also welcome submissions on novel applications of these techniques. The workshop is expected to be a forum that brings together researchers and practitioners from both academia and industry in the areas of the Semantic Web, information retrieval, data engineering, and human-computer interaction, to discuss high-quality research and emerging applications, to exchange ideas and experience, and to identify new opportunities for collaboration.
 
 
Website:
 

 

 

Workshop: LDQ: 3rd Workshop on Linked Data Quality
Abstract:
The focus of this workshop is to reveal novel methodologies and frameworks for assessing, monitoring, maintaining, and improving the quality of Linked Data, as well as to highlight tools and user interfaces which can effectively assist in its assessment and repair. In addition, the workshop seeks methodologies that help to identify the current impediments to building real-world Linked Data applications leveraging data and ontology quality. Addressing Linked Data quality issues will not only help in detecting inherent data quality problems currently plaguing Linked Data, but also provide the means to fix these problems and maintain quality in the long run.
 
 
Website:
 

 

 

Workshop: Workshop on Extraction and Processing of Rich Semantics from Medical Texts
Abstract:
Medical documents bear rich semantics such as facts, experiences, opinions or information which could, when extracted automatically, support a broad range of applications. Physicians could learn about the experiences of their colleagues, get hints about critical events in the treatment of a specific patient or receive information for improving treatment. However, language peculiarities, content diversity, and the streaming nature of clinical documents pose many challenges, as does the trade-off of filtering noise at the cost of losing potentially relevant information. This workshop is devoted to technologies for dealing with clinical documents for medical information gathering and application in knowledge-based systems. The aim of the workshop is to encourage researchers from the medical natural language processing (NLP) and knowledge management communities to present novel issues and techniques related to the extraction and processing of rich semantics from medical texts, but more importantly to discuss current challenges and future steps towards new directions for gathering and processing rich semantics in the medical domain.
 
 
Website:
 
 

 

 

Workshop: Fourth Workshop on Linked Media (LiME-2016)
Abstract:
If the future Web is to fully use the scale and quality of online media, a Web-scale layer of structured and semantic media annotation is needed, which we call Linked Media. Drawing on the success of the Linked Data movement, we believe annotation of media using Linked Data concepts can be the basis for Web-wide media interlinking based on concept matching and relationships.

This 4th workshop on Linked Media (LiME-2016), building on previous successful events held at WWW2013, ESWC2014 and WWW2015, aims at promoting the principles of Linked Media on the Web by gathering semantic multimedia and Linked Data researchers to exchange current research and development work on creating conceptual descriptions of media items, publishing multimedia metadata on the Web, and processing it semantically, particularly based on Linked Data approaches to concept matching and relationships. Specifically, we aim to build a research community to promote a future Web where automated multimedia analysis results can be used as a basis to integrate Linked Data-based conceptual annotations into structured media descriptions, which can be published and shared online. When media descriptions are more easily findable and processable, new applications and services can be created in which online media is more easily shared, retrieved and re-used.

 
 
Website:
 

 

 

Workshop: Semantic Web Technologies in Mobile and Pervasive Environments (SEMPER)
Abstract:
The workshop aims to explore emerging research topics of interest in applying Semantic Web technologies in the domains of Pervasive and Mobile Computing, Ambient Intelligence, the Internet of Things and Interoperable Platforms. The main objective is to stimulate and foster active exchange, interaction and comparison of approaches on: formal ontologies for the semantic enrichment, representation and linking of sensor data, events and resources; integration of sensors, mobile and wearable devices; IoT platform convergence and dissemination; and context-aware, real-time discovery, reasoning, interpretation and composition of data sources for building high-level applications. The workshop will provide the participants with an opportunity to discuss specific research and technical topics in the aforementioned areas, sharing their results and practical development experiences in these fields.
 
 
Website: 
 
 

 

 

Workshop: SALAD – Services and Applications over Linked APIs and Data
Abstract:
The World Wide Web has significantly evolved during the past 25 years, developing from a collection of a few interlinked static pages into a global ubiquitous platform for sharing, searching, and browsing dynamic and customisable content in a variety of different media formats. Future developments bring the promise of a higher level of automation, distributed search and the use of intelligent personal agents, which autonomously perform tasks on behalf of the user. The foundation for these trends is laid by the ever-growing number of users and web sites and the increasing data volumes, but also by the use and popularity of Linked Data and Web APIs. Unfortunately, despite some initial efforts and progress towards integrated use, these two technologies remain mostly disjoint in terms of developing solutions and applications. To this end, SALAD aims to explore the possibilities of facilitating a better fusion of Web APIs and Linked Data, thus enabling the harvesting and provisioning of data through applications and services on the Web.
 
 
Website:

 

 

 

Workshop: CoDeS 2016 - International Workshop on Completing and Debugging the Semantic Web

"Know@LOD" and "CoDeS" have been joined.

 

 

Abstract:
Developing ontologies and Semantic Web datasets is not an easy task, and as the ontologies and datasets grow in size, they are likely to show a number of defects (wrong information as well as omissions). Such ontologies and datasets, although often useful, also lead to problems when used in semantically-enabled applications: wrong conclusions may be derived, or valid conclusions may be missed. Further, in recent years more and more mappings, both between ontologies and between entities in the Linked Open Data cloud, have been generated, e.g. using ontology alignment and/or entity linking systems, forming a linked network of datasets and ontologies. This has led to a new opportunity to deal with defects, as the links between datasets and ontologies, or the interlinks between them, may be exploited for debugging. On the other hand, it has also introduced a new difficulty, as the mappings may not always be correct and may need to be debugged themselves. The linked data level may also contain wrong information and omissions, in the data as well as in the links.

This workshop intends to be a forum where issues related to completing and debugging the Semantic Web are discussed.

 
 
Website:
 

 

 

Workshop: Managing the Evolution and Preservation of the Data Web
Abstract:
This workshop targets one of the emerging and fundamental problems of the Semantic Web, specifically the preservation of evolving linked datasets. The topic is of particular relevance to ESWC since it raises awareness of the importance of researching solutions for preserving and managing dynamic linked datasets. Fostering active usage of such evolving datasets requires addressing challenges such as synchronisation, change representation, and storing and querying over evolving graphs.

Apart from researchers and practitioners, the target audience comprises data publishers and consumers. Publishers will benefit from attending this workshop by learning about ways and best practices to publish their evolving datasets. Consumers benefit by being able to discuss their expectations, requirements and current systems for handling and processing changing datasets in efficient ways. In addition, this year's workshop introduces an Open RDF Archive Challenge and also invites industry to submit use cases and practical presentations. We expect around 20 participants.

 
 
Website:
 

 

 

 

 

Workshop: 1st Workshop on Humanities in the SEmantic web (WHiSE 2016)
Abstract:
WHiSE is a full-day symposium aimed at strengthening communication between scholars in the Digital Humanities and Linked Data communities and discussing unthought-of opportunities arising from the research problems of the former. Its best-of-both-worlds format accommodates the practices of scholarly dialogue in both fields by inviting visions, real systems and debate.
 
 
Website:
 

 
Workshop: 2nd Int. Workshop on Semantic Web for Scientific Heritage (SW4SH)
Abstract:
The purpose of the workshop is to provide a forum for discussion about methodological approaches to the specificity of semantically annotating "scientific" texts (in the wide sense of the term, including disciplines such as history, architecture, or rhetoric), and to support a collaborative reflection on possible guidelines or specific models for building historical ontologies. A key goal of the workshop, which focuses on research issues related to pre-modern scientific texts, is to emphasize the benefit of multidisciplinary research for creating and operating on relevantly structured data. The workshop organizers all belong to the Zoomathia international research network funded by the French National Centre for Scientific Research (CNRS). This network gathers French, Italian, German and English researchers and aims to study the formation and transmission of ancient zoological knowledge over a long period, with a historical, literary and epistemological approach, and to create open knowledge sources on classical zoology to be published on the web of linked data. This workshop is an opportunity to present the activity of the network, to enlarge the network with interested members of the Semantic Web community, and to benefit from the results of related research projects.
 
 
Website:
 

 

 

 

 

CANCELLED 

Workshop: SWSDI: 1st Workshop on Semantic Web for Federated Software Defined Infrastructures

Website: http://ivi.fnwi.uva.nl/sne/swsdi2016