ESWC 2016 Workshops and Tutorials
Tentative schedule for ESWC16 Workshops and Tutorials* (last updated: 2015-12-22):
MAY 29th MORNING | MAY 29th AFTERNOON | MAY 30th MORNING | MAY 30th AFTERNOON |
---|---|---|---|
09:00-12:30 | 14:00-17:30 | 09:00-12:30 | 14:00-17:30 |
LDF & Dev Hackshop | | | |
Instance Matching Benchmarks | | | |
RichMedSem | | | |
PROFILES | | | |
SumPre | | | |
SW4SH | | | |
* Although we will do our best to keep to this schedule, please note that it is currently a draft and changes may be made according to organizational needs.
Legend: Tutorial / Workshop / Workshop and Tutorial
The tutorial is motivated by two observations:
- there is a proliferation of RDF systems, and identifying the strong and weak points of these systems is important to support users in deciding which system to use for their needs;
- surprisingly, there is a similar proliferation of RDF benchmarks, a development that adds to the confusion, since it is not clear which benchmark(s) one should use (or trust) to evaluate existing or new systems.
Benchmarks can be used to inform users of the strengths and weaknesses of competing software products, but more importantly, they encourage the advancement of technology by providing both academia and industry with clear targets for performance and functionality. Given the multitude of usage scenarios of RDF systems, one can ask the following questions:
- How can one come up with the right benchmark that accurately captures all these use cases?
- How can a benchmark capture the fact that RDF data are used to represent the whole spectrum of data, from structured (relational data converted to RDF), through semi-structured (XML data converted to RDF), to natively unstructured graph data?
- How can a benchmark capture the different data and query patterns and provide a consistent picture of system behavior across different application settings?
- When one benchmark does not suffice and multiple ones are needed, how can one pick the right set of benchmarks to try?
These are particularly hard questions whose answers require both an in-depth understanding of the domains where RDF is used, and an in-depth understanding of which benchmarks are appropriate for which domains. In this tutorial, we provide some guidance in this respect by discussing state-of-the-art RDF benchmarks and, if time permits, graph benchmarks.
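To give a flavor of what even the simplest benchmarking run involves, here is a minimal micro-benchmark sketch in Python using rdflib; the dataset file and the query are illustrative placeholders, not part of the tutorial material.

```python
import time
from rdflib import Graph

# Load a dataset into an in-memory store ("dataset.ttl" is a placeholder).
g = Graph()
g.parse("dataset.ttl", format="turtle")

# An illustrative SPARQL query to time.
QUERY = """
SELECT ?s ?name WHERE {
    ?s <http://xmlns.com/foaf/0.1/name> ?name .
}
"""

RUNS = 5
timings = []
for _ in range(RUNS):
    start = time.perf_counter()
    results = list(g.query(QUERY))  # force full result materialization
    timings.append(time.perf_counter() - start)

print(f"{len(results)} results; best of {RUNS} runs: {min(timings):.4f}s")
```

A real benchmark adds controlled datasets of varying size and structure, warm-up runs, query mixes, and metrics beyond wall-clock time, which is exactly where the choice of benchmark matters.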
Shape Expressions (ShEx) has been designed as an intuitive and human-friendly high-level language for RDF validation.
In 2014, the W3C chartered a working group called RDF Data Shapes to produce a language for defining structural constraints on RDF graphs. The proposed technology is called SHACL (Shapes Constraint Language), and its first public working draft was published in October 2015.
In this tutorial we will present both ShEx and SHACL using examples and RDF data modelling exercises.
Like the popular SPARQL-by-example tutorial, this tutorial includes step-by-step instructions with examples followed by exercises. Participants can download validation tools to use locally or use web-based interfaces such as RDFShape or the W3C ShEx Workbench.
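As a taste of what such validation exercises look like, here is a minimal sketch using the pySHACL Python library; the shape and data below are invented for illustration and are not taken from the tutorial material.

```python
from rdflib import Graph
from pyshacl import validate

# Illustrative data graph: one person with a name, one without.
DATA = """
@prefix ex: <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
ex:alice a ex:Person ; foaf:name "Alice" .
ex:bob a ex:Person .
"""

# Illustrative SHACL shape: every ex:Person needs exactly one foaf:name.
SHAPES = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [ sh:path foaf:name ; sh:minCount 1 ; sh:maxCount 1 ] .
"""

data_graph = Graph().parse(data=DATA, format="turtle")
shapes_graph = Graph().parse(data=SHAPES, format="turtle")

conforms, _, report_text = validate(data_graph, shacl_graph=shapes_graph)
print(conforms)     # False: ex:bob has no foaf:name
print(report_text)  # human-readable validation report
```

A ShEx schema can express the same constraint as a shape that ex:Person nodes are tested against; the tutorial presents both notations with examples of this kind.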
LDF and ESWC DevWS have been joined
- representative predicate-argument extraction paradigms, namely Combinatory Categorial Grammar (CCG) and Dependency Grammar (DG), elaborating on the corresponding implications for the expressiveness and completeness of the subsequently generated linked data;
- linguistic resources, with the main focus on FrameNet, as a backbone for delineating the meaning of the extracted predicates and their participating arguments (see the FrameNet sketch after this list);
- presentation of state-of-the-art frame detection systems, in order to illustrate and foster understanding of the strengths and weaknesses incurred by the different predicate-argument paradigms;
- vocabularies and models for generating Linked Data and ontologies.
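To make the FrameNet point concrete, here is a small sketch using NLTK's FrameNet corpus reader to inspect a frame and the argument roles (frame elements) a frame detection system must fill; the chosen frame is an arbitrary example, not tutorial material.

```python
import nltk
from nltk.corpus import framenet as fn

# One-time download of the FrameNet 1.7 data.
nltk.download("framenet_v17", quiet=True)

# Look up an example frame and list its frame elements,
# i.e. the argument roles that extracted predicates must fill.
frame = fn.frame("Commerce_buy")
print(frame.name, "-", frame.definition[:80], "...")
for fe_name, fe in frame.FE.items():
    print(f"  {fe_name} ({fe.coreType})")

# Lexical units that evoke this frame (e.g. "buy.v", "purchase.v").
print(sorted(frame.lexUnit))
```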
The goal of this tutorial is to present and discuss several complementary techniques, developed in the frame of the SemData project, which help address the challenges of improving the quality and scalability of Semantic Data Management while taking into account the distributed nature, dynamics, and evolution of datasets. In particular, the objectives are:
- Refined temporal representation: to present developments on the temporal features required to cope with dataset dynamics and evolution.
- Ontology learning and knowledge extraction: to demonstrate a methodology for robustly extracting a consensual set of community requirements from a relevant professional document corpus, refining the ontology, and evaluating the quality of this refinement.
- Distribution, autonomy, consistency and trust: to present an approach to implementing Read/Write Linked Open Data that copes with participant autonomy and trust at scale.
- Entity resolution: to discuss how traditional entity resolution workflows have to be revisited in order to cope with the new challenges stemming from the openness of the Web and from data heterogeneity, variety, complexity, and scale (a toy sketch of such a workflow follows this list).
- Large-scale reasoning: to give an overview of the variety of existing platforms and techniques that allow parallel and scalable data processing, enabling large-scale reasoning based on rule and data partitioning over various logics.
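To illustrate the entity resolution objective, below is a toy sketch of a classic two-stage workflow (blocking, then pairwise similarity matching); the records, blocking key, and threshold are all invented for illustration and sidestep the scale issues the tutorial addresses.

```python
from collections import defaultdict
from difflib import SequenceMatcher

# Toy records from two sources (invented examples).
source_a = [("a1", "Philharmonie de Paris"), ("a2", "International Business Machines")]
source_b = [("b1", "Philharmonie de Paris (FR)"), ("b2", "IBM Corporation")]

def block_key(name: str) -> str:
    # Blocking: only compare records sharing a cheap key (first letter)
    # to avoid comparing every record against every other one.
    return name[0].lower()

blocks = defaultdict(lambda: ([], []))
for rid, name in source_a:
    blocks[block_key(name)][0].append((rid, name))
for rid, name in source_b:
    blocks[block_key(name)][1].append((rid, name))

# Matching: pairwise string similarity within each block.
THRESHOLD = 0.6  # arbitrary cut-off for this toy example
for left, right in blocks.values():
    for rid_a, name_a in left:
        for rid_b, name_b in right:
            score = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
            if score >= THRESHOLD:
                print(f"{rid_a} <-> {rid_b}: {score:.2f}")

# Only the near-identical pair clears the threshold; "IBM Corporation"
# does not, which hints at why real workflows need richer matching.
```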
This tutorial will focus on the role of user data, knowledge bases, and reasoning techniques in User Modeling. UM methods aim to create digital representations of users and then adapt the interface or the content based on these models.
The tutorial will first provide an overview of research in the area, presenting the evolution of User Modeling methods over the years, focusing on the role of data and methods from the social and semantic Web in the creation and enrichment of user models. Examples from the most relevant state-of-the-art work will be presented.
Then, we will describe methods and techniques to be used in the User Modeling process to obtain user data, to enrich user data with person-related information as well as data from the Linked Open Data cloud, and to reason about these models. We will give a brief overview of methods for representing user models, how these models are used for providing recommendations, and how they are evaluated.
Finally, a practical exercise on user model enrichment will be presented.
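As a hint of what such an enrichment exercise can involve, here is a minimal sketch that expands a user's declared interest with related categories fetched from DBpedia via SPARQL; the toy user model and the query are illustrative, not the tutorial's actual exercise.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Toy user model: a single declared interest (illustrative).
user_model = {"interests": ["http://dbpedia.org/resource/Semantic_Web"]}

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)

# Enrich each interest with its DBpedia subject categories.
for interest in list(user_model["interests"]):
    sparql.setQuery(f"""
        SELECT ?category WHERE {{
            <{interest}> <http://purl.org/dc/terms/subject> ?category .
        }} LIMIT 5
    """)
    results = sparql.query().convert()
    for binding in results["results"]["bindings"]:
        user_model["interests"].append(binding["category"]["value"])

print(user_model["interests"])
```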
DOREMUS is a research project that aims to develop tools and methods to describe, publish, interconnect, and contextualize music catalogues on the web using semantic web technologies. Its primary objective is to provide common knowledge models and shared multilingual controlled vocabularies. The data modeling working group relies on the cataloguing expertise of three major cultural institutions: Radio France, the BnF (the French national library), and the Philharmonie de Paris.
FRBRoo is used as a starting point for its flexibility and its separation of concerns between a (musical) Work and an Event (its interpretation). We have extended the model with classes and properties specific to musical data, and we have published a set of shared multilingual vocabularies. The result should enable fine-grained descriptions of both traditional and classical music works, as well as of the numerous concerts that are regularly held.
The aim of this tutorial is to provide in-depth explanations of those models and controlled vocabularies and to show different applications consuming this data, such as exploratory search and music recommendation applications.
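To sketch what the Work/Event separation might look like in RDF, here is a minimal example built with rdflib; the class URIs follow the Erlangen FRBRoo naming convention, but the instance data and the linking property ex:performanceOf are invented placeholders, not actual DOREMUS vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Erlangen FRBRoo namespace; EX holds invented placeholder terms.
EFRBROO = Namespace("http://erlangen-crm.org/efrbroo/")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("efrbroo", EFRBROO)
g.bind("ex", EX)

# The (musical) Work: an abstract creation.
work = EX["work/bolero"]
g.add((work, RDF.type, EFRBROO.F1_Work))
g.add((work, RDFS.label, Literal("Boléro", lang="fr")))

# An Event: one concrete performance of that Work.
# ex:performanceOf is a placeholder, not a FRBRoo/DOREMUS property.
performance = EX["event/bolero-premiere-1928"]
g.add((performance, RDF.type, EFRBROO.F31_Performance))
g.add((performance, EX.performanceOf, work))
g.add((performance, RDFS.label, Literal("Boléro premiere, Paris, 1928", lang="en")))

print(g.serialize(format="turtle"))
```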
Identifying the emotions (e.g., sentiment polarity, sadness, happiness, anger, irony, sarcasm) and the modality (e.g., doubt, certainty, obligation, liability, desire) expressed in the continuously growing body of user-generated content on the Web is critical to enable the correct interpretation of the opinions expressed or reported about social events, political movements, company strategies, marketing campaigns, product preferences, etc.
This has raised growing interest both within the scientific community, by providing it with new research challenges, and in the business world, where applications such as marketing and financial prediction would gain remarkable benefits.
This workshop is a follow-up of the ESWC 2014 workshop on "Semantic Web and Sentiment Analysis".
Based on the lessons learnt from the first edition, the scope of this year's workshop is somewhat broader (although still focused on a very specific domain), and accepted submissions will include abstracts and position papers in addition to full papers. The workshop's main focus will be discussion rather than presentations, which are seen as seeds for discussion topics; an expected result is a joint manifesto and research roadmap that will provide the Semantic Web community with inspiring research challenges.
"Know@LOD" and "CoDeS" have been joined.
We hope to continue this series in the context of ESWC 2016. Given the experience of the past years, Know@LOD 2016 is planned as a full-day workshop. The proceedings of the workshop will be published with CEUR-WS.
This 4th Workshop on Linked Media (LiME-2016), building on previous successful events held at WWW 2013, ESWC 2014, and WWW 2015, aims to promote the principles of Linked Media on the Web. It gathers semantic multimedia and Linked Data researchers to exchange current research and development work on creating conceptual descriptions of media items, publishing multimedia metadata on the Web, and processing it semantically, in particular via Linked Data approaches to concept matching and relationships. Specifically, we aim to build a research community that promotes a future Web where automated multimedia analysis results can be used as a basis to integrate Linked Data-based conceptual annotations into structured media descriptions, which can then be published and shared online. When media descriptions are more easily findable and processable, new applications and services can be created in which online media are more easily shared, retrieved, and re-used.
"Know@LOD" and "CoDeS" have been joined.
This workshop intends to be a forum where issues related to completing and debugging the Semantic Web are discussed.
Apart from researchers and practitioners, the target audience comprises data publishers and consumers. Publishers will benefit from attending this workshop by learning about ways and best practices to publish their evolving datasets. Consumers will benefit by being able to discuss their expectations, requirements, and current systems for handling and processing changing datasets in efficient ways. In addition, this year's workshop introduces an Open RDF Archive Challenge and also invites industry to submit use cases and practical presentations. We expect around 20 participants.
CANCELLED
Workshop: SWSDI: 1st Workshop on Semantic Web for Federated Software Defined Infrastructures
Website: http://ivi.fnwi.uva.nl/sne/swsdi2016