
Since the so-called “deep learning revolution”, remarkable progress has been driven by RNNs, CNNs, GANs, and more – pushing fields such as computer vision and natural language processing forward by leaps and bounds. Often, though, it’s the relationships between our data points, as much as the data points themselves, that we need to model, and Euclidean representations can fall short.
Graph Neural Networks have become an increasingly hot topic in recent years. Since 2014, applying deep learning to graph-structured data has become far less niche, and many algorithmic advances have made prediction on graph-structured data practical. Even within the term “Graph Neural Networks”, however, there is a variety of vastly different approaches – and a lot of hype. So, what can GNNs do for you?
In this discussion, we’ll focus on:
• Breaking down the terminology – what are Graph Neural Networks versus Graph Convolutional Networks, and what are some other kinds of GNNs?
• Dissecting the intuition behind Graph Convolutional Networks – what makes them work?
• How to get started embedding and predicting on benchmark biological datasets with an implementation of a Graph Convolutional Network (see the sketch after this list).
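As a quick preview of that last point, here is a minimal sketch of the kind of workflow we’ll build toward. It assumes PyTorch Geometric is installed and uses the PROTEINS graph-classification benchmark from the TUDataset collection as a stand-in biological dataset; the dataset and model details used later in the article may differ.

```python
# A rough sketch, not the article's final implementation: load a benchmark
# biological dataset (PROTEINS) and define a two-layer GCN with PyTorch Geometric.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import TUDataset
from torch_geometric.nn import GCNConv, global_mean_pool

# Graph classification benchmark: each graph is a protein, labeled enzyme / non-enzyme.
dataset = TUDataset(root="data/TUDataset", name="PROTEINS")

class GCN(torch.nn.Module):
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.lin = torch.nn.Linear(hidden_dim, dataset.num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))  # message passing, layer 1
        x = F.relu(self.conv2(x, edge_index))  # message passing, layer 2
        x = global_mean_pool(x, batch)         # pool node embeddings into one graph embedding
        return self.lin(x)                     # graph-level class logits
```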