The goal of this session is to present a Blue Brain team effort to implement an RDF- and SHACL-based Knowledge Graph and, more importantly, to present and discuss the challenges faced and lessons learned when building a large-scale Knowledge Graph for a data-driven research project like the Blue Brain Project.
What would you like to talk about?
The digital data deluge and cross-disciplinary, multi-modal scientific investigations, along with tremendous available computing power, have led to team-based, data-driven and data-intensive methods in science. This comes with a set of challenges for collecting, integrating, accessing, using, reusing and preserving heterogeneous datasets generated in different contexts. As a data-driven project aiming to build biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately the human brain, the Swiss brain research initiative Blue Brain Project also faces these challenges. To address them we built and open-sourced Blue Brain Nexus, an RDF-based Knowledge Graph platform uniquely combining a flexible graph database, a powerful search engine and a scalable data store under a unified, declarative, W3C SHACL model-driven REST API to enable:
- Easy unification and integration of fragmented and disparate data from heterogeneous domains, breaking data and metadata silos by leveraging the expressiveness of Semantic Web formats and languages as well as RDF's flexibility and schema-less nature
- Specification of best practices for data collection, storage and description through high-quality metadata using the recent W3C SHACL specification
- Data lineage and provenance recording and description
- FAIR (Findable, Accessible, Interoperable, Re-usable) data and metadata management
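To make the shape-driven validation idea above concrete, here is a minimal, hand-rolled sketch in Python of the kind of constraint a SHACL node shape expresses (required properties, expected datatypes, in the spirit of sh:minCount and sh:datatype). All property names and the toy shape format are hypothetical illustrations; Blue Brain Nexus enforces real W3C SHACL shapes through its REST API using a SHACL engine, not ad-hoc code like this.

```python
# Hypothetical sketch of shape-based metadata validation, loosely in the
# spirit of W3C SHACL's sh:minCount / sh:datatype constraints. A real
# system would feed actual SHACL shapes to a SHACL validation engine.

# A toy "shape": each property name maps to (required?, expected type).
NEURON_SHAPE = {
    "name": (True, str),
    "somaDiameter": (True, float),  # hypothetical numeric property
    "brainRegion": (False, str),    # optional property
}

def validate(record, shape):
    """Return a list of violation messages (an empty list means conformant)."""
    violations = []
    for prop, (required, expected_type) in shape.items():
        if prop not in record:
            if required:
                violations.append(f"missing required property: {prop}")
        elif not isinstance(record[prop], expected_type):
            violations.append(f"{prop}: expected {expected_type.__name__}")
    return violations

good = {"name": "L5_TPC", "somaDiameter": 16.3}
bad = {"somaDiameter": "large"}

print(validate(good, NEURON_SHAPE))  # []
print(validate(bad, NEURON_SHAPE))   # two violations
```

The value of the declarative approach is that the shape, like a SHACL schema, is data rather than code: it can be published, versioned and reused independently of any one validator.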
We learned a lot along the way! From building a production-ready Knowledge Graph platform, to on-boarding and introducing managers, developers and scientists to Semantic Web technologies, especially the “painfully simplistic” RDF with its developer-friendly (is it?) serialisation format JSON-LD, as well as the “20 years late” W3C SHACL specification for validating RDF graphs.
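To illustrate why JSON-LD is pitched as developer-friendly, here is a minimal sketch, using only Python's standard library, of how a JSON-LD `@context` maps plain JSON keys to unambiguous IRIs. The term names, IRIs and the naive expansion helper are illustrative assumptions; a real application would use a full JSON-LD processor.

```python
import json

# A minimal JSON-LD document: to a developer it reads as plain JSON,
# while the @context maps each key to an unambiguous IRI.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "label": "http://www.w3.org/2000/01/rdf-schema#label",
    },
    "@id": "https://example.org/neurons/42",  # hypothetical identifier
    "name": "L5_TPC",
}

def expand_keys(document):
    """Naive sketch of JSON-LD term expansion: replace context terms with
    their IRIs. A real JSON-LD processor handles far more (nested contexts,
    @type coercion, prefix resolution, ...)."""
    context = document.get("@context", {})
    return {
        context.get(key, key): value
        for key, value in document.items()
        if key != "@context"
    }

print(json.dumps(expand_keys(doc), indent=2))
```

The point of contention in practice is exactly this duality: the same document is both an ordinary JSON object and an RDF graph, which is convenient for developers but can obscure the graph semantics.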
RDF and the W3C SHACL specification also raise technical and usage challenges for both users and developers when put into production systems, as the recent W3C Workshop on Web Standardisation for Graph Data showed.
The goal of this session is to present a Blue Brain team effort to implement an RDF- and SHACL-based Knowledge Graph and, more importantly, to present and discuss the challenges faced and lessons learned when building a large-scale Knowledge Graph for a data-driven research project like the Blue Brain Project. The goal is also for me to share with my fellow knowledge engineers and Semantic Web practitioners what it is like to be one of the very few knowledge engineers in an entire organisation, and how to better communicate and provide incentives for the adoption of Semantic Web technologies.
What would you like to know from other participants, and what feedback are you looking for?
This session is intended for: developers, data engineers, data scientists, knowledge engineers, managers, scientists, …
I would like feedback on the approach taken to combine RDF and SHACL in a declarative, model-driven REST API for data integration, as well as on the overall technical choices and the Semantic Web technologies used. I would also like to hear not only about other potential use cases that could benefit from RDF and SHACL, but also about solutions and best practices for communicating Semantic Web technologies more effectively and increasing their adoption.
Anyone who wants to learn how to build a knowledge graph.