Trip Report: Semantic Web in Libraries (SWIB13), Hamburg, 25th-27th November 2013


The 5th annual meeting took place in Hamburg, Germany, from 25th-27th November 2013. The meeting was attended by Irina Arndt, Natasa Bulatovic and Mary-Ann Ritter from the MPDL.

Description: The SWIB conference aims to provide substantial information on LOD (Linked Open Data) developments relevant to the library world and to foster the exchange of ideas and experiences among practitioners. SWIB encourages thinking outside the box by involving participants and speakers from other domains, such as scholarly communications, museums and archives, or related industries.

As in the years before, SWIB13 was organized by the North Rhine-Westphalian Library Service Centre (hbz) and the ZBW - German National Library of Economics / Leibniz Information Centre for Economics. The conference language was English.




Side note: At the SWIB13 conference in Hamburg, November 25-27, 2013, a pre-conference VivoCamp was organised, focused on exchanging experiences with and information about VIVO, an open source tool based on linked open data concepts for connecting research information within and across institutions.

The objectives of the meeting were achieved through discussion, demos and hands-on activities.

Monday, 25th November 2013[edit]


  • started 2003
  • aimed at researchers from the life sciences
  • initial idea: a research network for getting in touch with each other
  • 2009: project proposal and funding; goal: a national network of scientists
  • user profiles with self-editing; some data can be locked
  • is a content disseminator
  • one can add ontologies
  • 2010: 1.0 released as open source
  • currently 1.5.1

  • triple store contents are indexed into a Solr instance
  • searching takes place in Solr, not in the triple store
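The search-in-Solr-not-in-the-triple-store point can be sketched as follows. This is a toy in-memory stand-in for Solr, not VIVO's actual indexing code; all identifiers and field names are invented.

```python
# Sketch: flatten triples into one searchable document per subject before
# indexing, so keyword search never has to touch the triple store. A toy
# in-memory stand-in for Solr; names are invented.
from collections import defaultdict

triples = [
    ("ex:alice", "foaf:name", "Alice Example"),
    ("ex:alice", "vivo:researchArea", "Semantic Web"),
]

docs = defaultdict(dict)
for s, p, o in triples:
    docs[s].setdefault("text", []).append(o)   # catch-all searchable field
    docs[s][p] = o                             # plus a field per predicate

def search(term):
    """Keyword lookup against the flattened docs (what Solr would do)."""
    return [s for s, d in docs.items()
            if any(term.lower() in v.lower() for v in d["text"])]

print(search("semantic"))
```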

  • harvests much data automatically from different data sources -> via the VIVO Harvester
  • written in Java, using Java APIs
  • fetches from a variety of formats

Technical: How to Get VIVO Up and Running[edit]

  • impression: VIVO is hardware-intensive; a sandbox version for testing would be welcome
  • inconsistencies in the installation guide
  • technical support via the developer list: [1] (also for non-developers)

data ingestion[edit]

  • Vitro (= VIVO without the ontology) download (SourceForge, Git)
  • after installation almost no data (persons, institutions etc.)
  • pipeline of tools, using APIs: fetch / parse / transform (maps RDF to "VIVO RDF")
  • triple store = MySQL database
  • fetching: fetches data from a URL, database or local file (CSV fetcher, JDBC fetcher, SimpleXMLFetcher (for ingesting from RSS feeds), JSON fetcher) -> outputs an intermediate RDF format, one file per record
  • uses XSLT transforms
  • unique ID for each record
  • mapping nodes to entities: e.g. an organization becomes a foaf:Organization
  • transfer
  • scoring / match (duplicate avoidance)
  • data ingest alternatives to the VIVO Harvester: a project called "Karma" (provides a GUI), Google Refine (has a VIVO RDF plugin); the VIVO admin tools themselves can load RDF, too
  • exposing data: VIVO web pages, view data as RDF, query a SPARQL endpoint, Drupal front end
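A minimal sketch of the fetch/transform step described above, in Python (the actual VIVO Harvester is a Java toolchain using XSLT transforms); the namespace and field names are invented for illustration.

```python
# Toy fetch/transform step: read CSV records and emit one set of RDF
# triples per record, each record getting a unique URI. This mimics the
# harvester's intermediate "one file per record" RDF output; all names
# below are illustrative.
import csv
import io

BASE = "http://example.org/individual/"         # hypothetical namespace
FOAF = "http://xmlns.com/foaf/0.1/"
RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"

def record_to_ntriples(row):
    """Map one CSV row to N-Triples; an organization row becomes a foaf:Organization."""
    uri = f"<{BASE}{row['id']}>"                # unique ID per record
    return [
        f"{uri} {RDF_TYPE} <{FOAF}Organization> .",
        f'{uri} <{FOAF}name> "{row["name"]}" .',
    ]

csv_data = "id,name\norg42,Example Library\n"   # stand-in for a fetched file
triples = [t for row in csv.DictReader(io.StringIO(csv_data))
           for t in record_to_ntriples(row)]
print("\n".join(triples))
```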


Felix Ostrowski / Adrian Pohl - Introduction to Linked Open Data[edit]

Participation: Irina, Natasa, Mary-Ann

This workshop gave an excellent overview of the fundamentals of Linked Open Data technologies. Basic terms like URI, RDF, triple stores and SPARQL were explained with examples and discussed in small groups.


Tuesday, 26th November 2013[edit]

Klaus Tochtermann / Silke Schomburg, Opening[edit]

ZBW Leibniz Information Centre for Economics, Germany / North Rhine-Westphalian Library Service Center (hbz), Germany

Dorothea Salo, Soylent SemWeb Is People! Bringing People to Linked Data[edit]

University of Wisconsin-Madison, United States of America

Mappings and Mashups[edit]

Magnus Pfeffer, Automatic Creation of Mappings between Classification Systems for Bibliographic Data[edit]

Stuttgart Media University, Germany


  • use datasets with multiple classifications to derive mappings between classification systems
  • pairs are formed and a proportional agreement between two classes is computed by a formula
  • evaluation of the automatic mapping procedure by comparison with existing, intellectually created mappings for (parts of) the classification systems
  • 'Basic Classification' very useful for faceted search
  • DDC has been published as LOD; RVK (Regensburger Verbundklassifikation) and BC (Basic Classification) have not
  • ontology: SKOS
  • problem: RDF relations cannot be qualified (no standard yet)
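The pairing idea above can be illustrated with a short sketch. The talk's actual agreement formula is not recorded in these notes, so a simple co-occurrence proportion stands in for it, and the class labels are invented.

```python
# Derive candidate mappings between two classification systems from records
# classified in both. A plain co-occurrence proportion is used as a
# stand-in for the talk's scoring formula; all class labels are toy data.
from collections import Counter
from itertools import product

# each record: (classes from system A, classes from system B)
records = [({"QA76"}, {"54.00"}), ({"QA76"}, {"54.00"}),
           ({"QA76"}, {"31.00"}), ({"QC21"}, {"33.00"})]

pair_counts = Counter()     # how often class a co-occurs with class b
a_counts = Counter()        # how often class a occurs at all
for a_classes, b_classes in records:
    a_counts.update(a_classes)
    pair_counts.update(product(a_classes, b_classes))

# proportional agreement: share of a-records that also carry b
scores = {(a, b): n / a_counts[a] for (a, b), n in pair_counts.items()}
for pair, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))
```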

Nadine Steinmetz, Cross-Lingual Semantic Mapping of Authority Files[edit]

Hasso Plattner Institute, Germany

  • talk video: [2] (17 min)

Philipp Zumstein, Mash-up for Book Purchasing[edit]

Mannheim University Library, Germany


Libraries and Beyond[edit]

Valeria Pesce, AgriVIVO: A Global Ontology-Driven RDF Store Based on a Distributed Architecture[edit]

Global Forum on Agricultural Research (GFAR) / Food and Agriculture Organization of the United Nations (FAO) / Cornell University, United States of America


Nikolas Mitrou, HEAL-Link Activities and Plans on Annotating, Organizing and Linking Academic Content[edit]

National Technical University of Athens, Greece


Richard Wallis, Linked Data for Libraries: Great Progress, but What Is the Benefit?[edit]

OCLC, United Kingdom


Ontology Engineering[edit]

Lars G. Svensson, BIBFRAME: Libraries Can Lead Linked Data[edit]

DNB, Leipzig / Frankfurt am Main

  • aims: a format that allows easier data reuse = increased library visibility
  • a transport format
  • requirements: (free), model-agnostic, supports RDA (and other rule sets), extensible (for new materials)
  • model: work / instance / authority (FRBR groups 2/3) / annotation
  • interoperability is essential (e.g. with the FRBR data model), via community profiles
  • DNB: experimental implementation of BIBFRAME in the DNB catalog ("BIBFRAME representation of this record"; not functional yet, tested on Dec 19); instance data is mainly based on strings, not on URIs, while the work data uses more URIs (e.g. GND links for persons/subjects)
  • "up and running" in ~1 year
  • work to do: the vocabulary must be enriched; maybe enlarge the model (more main entities?)
  • currently no holdings in the data model yet ?!? - "use 'offer' instead"
  • DNB is still exploring
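The strings-versus-URIs point is easy to see in two toy triples: a literal name is a dead end, while a URI object (here Goethe's GND identifier) can be followed to more data. The predicate dcterms:creator is used for illustration only; it is not necessarily the property DNB's experiment uses.

```python
# Same statement twice: once with a plain string object (a dead end for
# linking) and once with a dereferenceable URI object. The work URI is
# made up; dcterms:creator is illustrative.
subject = "<http://example.org/work/123>"
creator = "<http://purl.org/dc/terms/creator>"

as_string = f'{subject} {creator} "Goethe, Johann Wolfgang von" .'
as_uri = f"{subject} {creator} <https://d-nb.info/gnd/118540238> ."  # GND: Goethe

print(as_string)
print(as_uri)
```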

Matias Mikael Frosterus, Building a National Ontology Infrastructure[edit]

The National Library of Finland, Finland


Carsten Klee / Jakob Voß, On the Way to a Holding Ontology[edit]

Berlin State Library, Germany / GBV Common Library Network, Germany


  • 5 holding-related use cases exist, e.g. a 'closest copy service' / search engines get better knowledge about local holdings
  • preparation: analysing existing standards / vocabularies for describing holdings
  • contribute to the data model via the mailing list ( )
  • concepts: reuse existing ontologies / create micro-ontologies (e.g. dso:DocumentService for lending, copying etc.); ServiceEvent = a service that is also an event (e.g. a loan of copy y by person x at time z) / a couple of holdings properties
  • current specification: github/dini-ag-kim/holding-ontology
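The ServiceEvent idea can be sketched as a small record: a loan is a service that is also an event, tying together a person, a copy and a time. The dso: prefix follows the talk, but the namespace URI, terms and instance URIs below are illustrative assumptions, not the published vocabulary.

```python
# A loan modelled as a ServiceEvent: a service (lending) that is also an
# event with an agent, an item and a time. Namespace, term and instance
# names are illustrative, not taken from the published holding ontology.
import json

DSO = "http://purl.org/ontology/dso#"   # assumed namespace for illustration

loan_event = {
    "@id": "http://example.org/event/z",
    "@type": DSO + "Loan",              # a kind of dso:DocumentService
    "agent": "http://example.org/person/x",
    "item": "http://example.org/copy/y",
    "time": "2013-11-26T10:00:00",
}
print(json.dumps(loan_event, indent=2))
```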

Wednesday, 27th November 2013[edit]

Martin Malmsten, Decentralisation, Distribution, Disintegration - towards Linked Data as a First Class Citizen in Libraryland[edit]

National Library of Sweden

  • we didn't want to publish LOD, we wanted to use other LOD data pools
  • we can integrate any data source as long as it has a SPARQL endpoint and notifies us of changes (e.g. as a feed); you don't have to hold the data, you just have to be able to "react", e.g. update an index
  • preferred format: JSON-LD (easier to understand than RDF/XML)
  • formats are meant for data exchange and should stay outside the application - "inside: wonderful linked data, for exchange we use marc, if we have to"
  • user interfaces are essential for expanding LOD ('managers don't understand rdf')
  • what we did: built an interface on datasets from different sources (LOD + feed + SPARQL); 1-week workshop: LIBRIS data, VIAF, DBpedia -> e.g. create profiles of persons
  • only when actually using it do you find out whether it actually works
  • question of trust: harvest other sources or just link to them (still technical issues, e.g. caching)
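The JSON-LD preference is easy to illustrate: a statement that would be verbose in RDF/XML reads almost like ordinary JSON. The @id below is a made-up LIBRIS-style URI; foaf:name is a real vocabulary term.

```python
# A minimal JSON-LD document: one entity, one mapped property. Compared to
# RDF/XML this is plain JSON plus an @context that maps "name" to an IRI.
# The @id is a made-up LIBRIS-style URI.
import json

doc = {
    "@context": {"name": "http://xmlns.com/foaf/0.1/name"},
    "@id": "http://libris.kb.se/resource/auth/example",
    "name": "Selma Lagerlöf",
}
print(json.dumps(doc, indent=2, ensure_ascii=False))
```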

Agnès Simon, The "OpenCat" Prototype: Linking Public Libraries to National Datasets[edit]

Bibliothèque nationale de France, France


  • prototype
  • a tool allowing local libraries to take advantage of semantic web technologies
  • reuses FRBRized data of the national library (data available in RDF and JSON)
  • mixes it with other data
  • data from several sources -> put together in the free software CubicWeb (a Python framework on top of a relational database) => OpenCat
  • gathers different sources together
  • []
  • released 2012; user feedback: the FRBRized view was appreciated, external info good, biographic data good, 'navigation is quite natural'
  • next steps: improve the interface, equip more libraries with it

Contributing to Europeana[edit]

Péter Király, Semantic Web Technology in Europeana[edit]

Europeana Foundation

Kai Eckert, Specialising the EDM for Digitised Manuscripts[edit]

Universität Mannheim, Germany


Jose Manuel Barrueco Cruz, Application of LOD to Enrich the Collection of Digitized Medieval Manuscripts at the University of Valencia[edit]

University of Valencia, Spain


Base Technology: The Web[edit]

Simeon Warner, ResourceSync for Semantic Web Data Copying and Synchronization[edit]

Cornell University Library, United States of America


Fabian Steeg / Pascal Christoph, From Strings to Things: A Linked Open Data API for Library Hackers and Web Developers[edit]

North Rhine-Westphalian Library Service Center (hbz), Germany


Stephen Davison, Enhancing an OAI-PMH Service Using Linked Data: A Report from the Sheet Music Consortium[edit]

University of California, Los Angeles, United States of America


Vitali Peil, Exposing Institutional Repositories as Linked Data - a Case-Study[edit]

Bielefeld University Library, Germany

Lightning Talks[edit]

  • Lukas Koster
  • Carsten Klee
    • easily convert MARC data to RDF - a PHP tool - an alternative to Catmandu
  • Jens Mittelbach
  • Leander Seige/ Natanael Arndt
    • Managing electronic resources using linked data [5]
  • Adrian Pohl
    • The "Libraries Empowerment Manifesto" [7]
  • Timo Borst
    • The EU project EEXCESS - "Enhancing Europe's eXchange in Cultural, Educational and Scientific Resources" - started in 2013. The project uses a completely novel approach to information dissemination in order to link web content, such as images, videos, infographics, statistics or texts from social media channels and blogs, with cultural, educational and scientific content in a personalised and contextualised manner.
  • Jan Schnasse
    • the edoweb repository at the hbz moved from DigiTool to a new open source platform called "Regal", which is a Fedora-based repository