TransformerGO: modelling the attention between sets of gene ontology terms

Protein–protein interactions (PPIs) are important in a wide range of biological processes, yet only a small fraction of them have been experimentally identified. Moreover, high-throughput experimental approaches for detecting PPIs are known to suffer from high false-positive and false-negative rates. One of the strongest indicators of a protein interaction is semantic similarity derived from Gene Ontology (GO) annotations. While computational methods for predicting PPIs have grown in popularity in recent years, most of them fail to exploit the specificity of GO terms.
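To make the idea of GO semantic similarity concrete, here is a minimal sketch (not taken from the paper) of one classic measure, Resnik similarity: the information content (IC) of the most informative common ancestor of two terms. The toy DAG and the annotation frequencies below are invented purely for illustration.

```python
import math

# Hypothetical child -> parents edges of a tiny GO-like DAG (illustrative only).
PARENTS = {
    "GO:B": {"GO:A"},
    "GO:C": {"GO:A"},
    "GO:D": {"GO:B", "GO:C"},
    "GO:E": {"GO:C"},
}

# Hypothetical annotation frequencies p(term), used for IC = -log p(term).
P = {"GO:A": 1.0, "GO:B": 0.4, "GO:C": 0.5, "GO:D": 0.1, "GO:E": 0.2}

def ancestors(term):
    """Return the term itself plus all of its ancestors in the DAG."""
    result = {term}
    stack = [term]
    while stack:
        for parent in PARENTS.get(stack.pop(), set()):
            if parent not in result:
                result.add(parent)
                stack.append(parent)
    return result

def resnik(t1, t2):
    """IC of the most informative common ancestor of t1 and t2."""
    common = ancestors(t1) & ancestors(t2)
    return max(-math.log(P[a]) for a in common)

print(resnik("GO:D", "GO:E"))  # terms share GO:C, so IC = -log(0.5)
```

Classic measures like this score one pair of terms at a time and are static; the limitation motivating TransformerGO is that they do not learn which combinations of terms actually matter for a given interaction.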

Recently, Ieremie et al. (2022) proposed TransformerGO, a model that captures the semantic similarity between GO sets dynamically using an attention mechanism. They generate dense graph embeddings for GO terms using node2vec, an algorithmic framework for learning continuous representations of nodes in networks. TransformerGO learns deep semantic relations between annotated terms and can distinguish between negative and positive interactions with high accuracy. It outperforms classic semantic similarity measures on gold-standard PPI datasets, and state-of-the-art machine-learning-based approaches on large datasets from Saccharomyces cerevisiae and Homo sapiens. The authors also show how the neural attention mechanism within the transformer architecture detects relevant functional terms when predicting interactions.
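The sketch below illustrates the overall idea in PyTorch: the GO-term embeddings of the two proteins (stand-ins for the node2vec vectors) are concatenated into a single sequence, a transformer encoder lets terms from both sets attend to each other, and a pooled representation is classified as interacting or not. The embedding dimension, layer counts, and mean pooling here are illustrative assumptions, not the authors' exact configuration; see their repository for the real implementation.

```python
import torch
import torch.nn as nn

class InteractionClassifier(nn.Module):
    """Toy TransformerGO-style classifier (configuration is assumed, not the paper's)."""

    def __init__(self, dim=64, heads=4, layers=2):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.classifier = nn.Linear(dim, 1)  # logit: interacting vs. not

    def forward(self, go_a, go_b):
        # go_a: (batch, n_terms_a, dim), go_b: (batch, n_terms_b, dim)
        seq = torch.cat([go_a, go_b], dim=1)  # one joint sequence of GO terms
        attended = self.encoder(seq)          # self-attention across both sets
        pooled = attended.mean(dim=1)         # simple mean pooling (assumption)
        return self.classifier(pooled).squeeze(-1)

# Toy usage with random tensors standing in for node2vec GO-term embeddings.
model = InteractionClassifier()
logit = model(torch.randn(1, 12, 64), torch.randn(1, 9, 64))
print(torch.sigmoid(logit))  # predicted interaction probability
```

Because every term in one set can attend to every term in the other, the model can weight pairs of functional terms by relevance rather than reducing each protein's annotation set to a single static similarity score.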

The source code is available at https://github.com/Ieremie/TransformerGO.

Reference:

Ieremie, I. et al. (2022) TransformerGO: predicting protein–protein interactions by modelling the attention between sets of gene ontology terms. Bioinformatics 38(8): 2269–2277.

