In this notebook we first briefly explain the data model of the underlying Reactome data set. Secondly, we explain the data model that we have built into Reactome Pengine. We then explain where computations take place. Next we give some example queries that illustrate the capabilities of Reactome Pengine in the context of a SWISH notebook, including graphical rendering, R integration and Javascript applications. Note that further functionality of Reactome Pengine can be accessed by building client applications in the full desktop version of SWI-Prolog. The examples in this notebook cover only a fraction of the possible ways to use Reactome Pengine and are intended to illustrate calls to the API.
The underlying data model is based on an RDF triple graph. This is fully documented on the Reactome site: Reactome Data Model. In brief, the principal type of entity in Reactome is the reaction, and reactions have input and output entities. Reactions can also optionally be controlled by other entities. Each biological entity, such as a protein, a small molecule or a reaction, is given an ID. Entities also include 'Complexes' and 'Protein sets'. A complex is a set of molecular entities that have combined together. A protein set is a group of proteins that can perform the same biological function. Both complexes and protein sets can themselves be composed of complexes and protein sets. We can query this data directly with the rdf/3 predicate. To see how this is done, click the blue play triangle in the cell below. You can then find further solutions with 'Next'.
Prolog's semantic web library has other access predicates to query this data, for example rdf_has/3. For more details see the documentation here.
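As a minimal sketch of this low-level access, queries of this kind look as follows (the BioPAX property IRI is an assumption about the underlying vocabulary):

```prolog
% Enumerate raw triples (click 'Next' for further solutions).
?- rdf(Subject, Predicate, Object).

% rdf_has/3 also takes sub-property relations into account. The BioPAX
% displayName property used here is an assumption about the vocabulary;
% Name is returned as an RDF literal term.
?- rdf_has(Entity,
           'http://www.biopax.org/release/biopax-level3.owl#displayName',
           Name).
```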
In addition to querying the RDF file directly with the Prolog semantic web library, a number of higher-level predicates built on top of this library are also provided. These higher-level predicates often give a more intuitive and compact view of the data. The full documentation for the API is here.
Our model states that reactions are nodes on a graph. There are edges between these nodes in two cases:
Generate pathways, together with the list of links in each pathway, using:
Generate pathways that have an activation link where the activating entity is 'Complex452' using:
To convert between Reactome identifiers and the common names of entities, you can use the rid_name/2 predicate.
You may also want to convert Reactome protein identifiers to Uniprot identifiers, and for this you can use ridProtein_uniprotId/2.
Both of these predicates are full relations, so a user can query in either direction and use them to generate examples or to check that a given pair is related.
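The following sketch illustrates the different modes in which these predicates can be called; 'Q15465' (the UniProt accession of human SHH) is used only as an illustrative input.

```prolog
?- rid_name('Complex452', Name).               % Reactome id -> name
?- rid_name(Rid, Name).                        % enumerate id/name pairs
?- ridProtein_uniprotId('Protein1', Uniprot).  % Reactome protein id -> UniProt id
?- ridProtein_uniprotId(Rid, 'Q15465').        % UniProt id -> Reactome id(s)
```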
The following two examples retrieve the name and Uniprot ID for the Reactome IDs 'Complex452' and 'Protein1', respectively.
The Sonic Hedgehog (SHH) protein is well known. If we want to query Reactome Pengine to find which protein complexes it takes part in, we can use the following query.
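A sketch of such a query is shown below. Here ridProtein_uniprotId/2 and rid_name/2 are the API predicates introduced above, while complex_components/2 is a hypothetical stand-in for whatever predicate the API provides for listing the components of a complex; see the API documentation for the actual name.

```prolog
% Sketch only: complex_components/2 is a hypothetical predicate.
shh_containing_complex(Complex, ComplexName) :-
    ridProtein_uniprotId(Shh, 'Q15465'),    % Q15465 = UniProt accession of human SHH
    complex_components(Complex, Components),
    member(Shh, Components),
    rid_name(Complex, ComplexName).
```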
In Prolog a query is a call to a predicate (or, as we have seen, a conjunction of predicates). As with any Prolog program, we can reuse a query and give it a name by writing a new predicate definition containing the query. This is useful for complex queries, as we can decompose the complexity into small parts.
A useful feature of Prolog is that if predicate definitions are restricted to the pure subset of Prolog, then it is possible to reason logically about the solutions to queries. This is useful for debugging and for applying certain advanced machine learning algorithms, for example Inductive Logic Programming.
In the following program we name the previous query pathway_with_complex452_activation/1. This predicate is true when P is instantiated with a pathway that has an activation edge whose activating entity is 'Complex452'. We also define a predicate small_pathway/1 for pathways with fewer than 35 edges. We then define a new predicate, small_pathway_with_complex452_activation/1, by reusing pathway_with_complex452_activation/1 and small_pathway/1. As this new predicate is the composition of pathway_with_complex452_activation/1 and small_pathway/1, we can reason logically that it will be true for exactly the intersection of the answers to the original two predicates.
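A sketch of such a program is given below. It assumes a hypothetical pathway_links/2 predicate relating a pathway identifier to its list of links, with activation links represented as activation(Activator, From, To); the actual predicate name and link representation provided by the API may differ.

```prolog
% Sketch only: pathway_links/2 and the activation/3 link term are assumptions.
pathway_with_complex452_activation(P) :-
    pathway_links(P, Links),
    member(activation('Complex452', _From, _To), Links).

small_pathway(P) :-
    pathway_links(P, Links),
    length(Links, N),
    N < 35.

% True for pathways in the intersection of the two predicates above.
small_pathway_with_complex452_activation(P) :-
    pathway_with_complex452_activation(P),
    small_pathway(P).
```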
Running these three queries allows you to see that the answers to the third query are the intersection of the answers to the first two queries, as expected.
As we are seeing, using Prolog allows us to specify programs and algorithms, not just queries (in contrast to SQL, Cypher, REST APIs and SPARQL).
When using SWISH, our programs are executed either on Reactome Pengine or on the SWISH server (itself a pengine application). When using the desktop version of SWI-Prolog, programs are executed either on Reactome Pengine or on your local machine. The idea is that small programs can be brought to the large data, rather than needing to transfer large datasets to a user's machine. This, alongside the ability to send programs to the pengine, makes for an extremely flexible logical API. Both SWISH and Reactome Pengine have a time limit on queries. So far we have made simple queries of Reactome Pengine and executed our programs in SWISH. In order to reduce data transfer, and to send a program to Reactome Pengine, we use the third argument of pengine_rpc/3.
Here we write a program to find a path on the graph of reactions across the whole Reactome. The predicate path_program/1 returns the program we wish to run, as a list of clauses. The predicate path_from_to/3 retrieves the server address and the program and sends these, along with the query, to Reactome Pengine. The identified path is then returned.
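A sketch of this pattern is shown below. The server URL, the src_list/1 option and the edge/2 predicate assumed to exist on the server are all illustrative assumptions; consult the Reactome Pengine documentation for the actual address and predicate names.

```prolog
:- use_module(library(pengines)).

% The program to inject: a simple path search over an edge/2 relation that
% is assumed to be provided by the server (no cycle check, for brevity).
path_program([ path(Node, Node, [Node]),
               (path(From, To, [From|Rest]) :-
                    edge(From, Next),
                    path(Next, To, Rest)) ]).

% Send the program and the query to Reactome Pengine; only the answer
% (the path) is transferred back, not the underlying graph.
path_from_to(From, To, Path) :-
    path_program(Program),
    pengine_rpc('https://apps.nms.kcl.ac.uk/reactome-pengine',   % assumed URL
                path(From, To, Path),
                [ src_list(Program) ]).
```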
To perform the same query without using Reactome Pengine the entire database would need to be downloaded.
The following query performs a breadth first search to find the shortest paths (click next to see successive results):
In Prolog we can describe lists declaratively, so, for instance, we can write a definite clause grammar (DCG) for paths. DCGs have a slightly different syntax from standard Prolog predicates, but they can also be sent to Reactome Pengine. For simplicity of presentation, the DCG in the example below runs on the SWISH server. This example finds paths that pass through a reaction satisfying a user-defined rule; in this case we simply ask for a path that passes through a reaction that has CTDP1 (Protein11301) as an input. Additionally we add 'Path=[_,_|_]' to the query, which constrains the solutions to paths with at least two elements.
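As a minimal sketch (again assuming a hypothetical edge/2 relation between connected reactions), a path DCG might look like this:

```prolog
% Sketch only: edge/2 stands in for the real link predicate.
path(To, To)   --> [To].
path(From, To) --> [From], { edge(From, Next) }, path(Next, To).
```

A query such as phrase(path(From, To), Path), Path = [_,_|_] then describes the paths with at least two elements.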
So far in this tutorial we have been representing the reaction graph as lists and terms. While this is useful for computations, graphical representations are often better for conveying results to human users. To this end, in SWISH we can use the Graphviz renderer. Additionally, to make the graph visualisation more meaningful, we map the Reactome IDs to the reaction names.
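As a small sketch, the SWISH Graphviz renderer can display an answer given as a dot(String) term; the three-node graph below is illustrative only.

```prolog
:- use_rendering(graphviz).

% An illustrative three-reaction chain written in the dot language.
example_graph(dot("digraph reactions { reaction_a -> reaction_b -> reaction_c }")).
```

Querying ?- example_graph(G). then renders the graph in the answer pane.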
We can chart data from Reactome using C3, a Javascript visualisation library. The following code makes a simple plot showing the number of reactions in a set of pathways.
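As a sketch, the C3 renderer displays answers that are dicts with the c3 tag; the pathway names and reaction counts below are made up for illustration.

```prolog
:- use_rendering(c3).

% Illustrative data only: each pathway is paired with a made-up reaction count.
reaction_count_chart(c3{data:_{x:pathway,
                               rows:[pathway-reactions,
                                     'Pathway A'-12,
                                     'Pathway B'-30,
                                     'Pathway C'-7],
                               type:bar}}).
```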
Scraping the traditional HTML web for data makes it possible to combine many existing remote data sites with Reactome Pengine. Web scraping inside SWISH is currently limited to the predicate load_html/3, whereas if you build a client application with the full version of SWI-Prolog then predicates such as http_open/3 are also available. After retrieving the HTML with either of these methods, we can use the sgml and xpath libraries to manipulate the data.
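As a minimal sketch of this pattern (the URL in the usage example is a placeholder), load_html/3 fetches and parses a page into a DOM term and xpath/3 then extracts elements from it:

```prolog
:- use_module(library(sgml)).   % load_html/3
:- use_module(library(xpath)).  % xpath/3

% Fetch a page and extract the text of its <title> element.
page_title(URL, Title) :-
    load_html(URL, DOM, []),
    xpath(DOM, //title(text), Title).
```

For example, ?- page_title('https://www.example.org/', Title).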
In the example below we scrape gene expression data from the GEO website for sample GSM38051. We also demonstrate how results can be displayed with the SWISH table renderer.
We can now integrate our webscraping and Reactome Pengine in a single query. Here we find the expression values for the probes for Protein11042.
SWISH has built-in access to the R programming language, which means that we can perform statistical analysis using familiar tools. For full details see here. In the example below we query the number of edges in a set of pathways and the number of reactions in the same set. We then use R to calculate the correlation, fit a line and plot the data using R's qplot function.
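As a small sketch using made-up numbers, R expressions are evaluated with the <-/2 and <-/1 operators; the plotting goal assumes the ggplot2 package (which provides qplot) is available on the R server.

```prolog
% Illustrative data only.
edge_reaction_data([3, 10, 25, 40], [2, 8, 20, 33]).

% Correlation between the two vectors, computed in R.
edge_reaction_correlation(Cor) :-
    edge_reaction_data(Edges, Reactions),
    Cor <- cor(Edges, Reactions).

% Scatter plot of the data; assumes ggplot2 is installed on the R server.
edge_reaction_plot :-
    edge_reaction_data(Edges, Reactions),
    <- library("ggplot2"),
    <- qplot(Edges, Reactions).
```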
Run this query to see the correlation:
Run this query to plot the data with a fitted line:
SWISH allows the inclusion of Javascript code, which means we can use libraries such as d3. Therefore, we can make interactive charts and web applications with Reactome data inside a SWISH notebook. To see the Javascript code double click this text and scroll down to the 'script' tags.
We illustrate this functionality by combining Reactome Pengine and web scraping to build a simple application that shows Hive Plots.
In Hive Plots the geometric placement of nodes in a graph has meaning, based on user-defined rules. As we are using Prolog, we can build these rules easily. In the example below we visualise two features of reactions. These features are mapped to the geometric placement of nodes in the graph. The first feature is based on the network properties (the degree) of the reaction nodes. A reaction is assigned to one of three categories:
This first feature is illustrated by placing nodes on one of three axes: the vertical axis for category 1, the 4 o'clock axis for category 2, and the 8 o'clock axis for category 3.
For the second feature, we use the gene expression data that we scraped from the GEO website and perform the following steps:
To see the visualisation, select a pathway from the drop-down menu and click 'show pathway'. The visualisation enables comparison of the graph properties and expression levels of different pathways.
To see the term that the Javascript uses to build the hive plot, run the following query.
Reactome Pengine is a web-logic API for querying the human reactome. It provides both raw RDF data access and a set of built-in predicates to facilitate this. The pengine technology allows users to send entire programs to the API to augment the built-in data and predicates. This can be done from a local client program or, as shown here, from a cloud-hosted SWISH notebook. Programs hosted in SWISH notebooks are easy to share and can render query solutions graphically. In contrast, programs running in SWI-Prolog client applications have access to the full power of Prolog, including system calls, and they also keep local computations private. Either of these options allows researchers to perform analyses that require querying the human reactome and integrating it with other data sources. It is also possible to build a web application that interfaces with Reactome Pengine, for instance a GUI web-based tool that makes predictions using Reactome data.