Lauren Hinkel | MIT-IBM Watson AI Lab
November 29, 2022
Graphs, potentially extensive webs of nodes connected by edges, can be used to express and interrogate relationships in data, such as social connections, financial transactions, traffic, energy grids, and molecular interactions. As researchers collect more data and build out these graphical pictures, they will need faster and more efficient methods, as well as more computational power, to conduct deep learning on them in the form of graph neural networks (GNNs).
Now, a new method called SALIENT (SAmpling, sLIcing, and data movemeNT), developed by researchers at MIT and IBM Research, improves the training and inference performance of GNNs by addressing three key computational bottlenecks. This dramatically cuts down runtime on large datasets, which can contain, for example, on the scale of 100 million nodes and 1 billion edges. Further, the team found that the technique scales well as computational power grows from one to 16 graphics processing units (GPUs). The work was presented at the Fifth Conference on Machine Learning and Systems.
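As background for the sampling bottleneck the method targets: GNN mini-batch training typically begins by sampling a fixed-size neighborhood around each seed node, so that per-batch work stays bounded even on graphs with hundreds of millions of nodes. The sketch below, in plain Python, is illustrative only; the function, graph, and fanout parameter are hypothetical and do not reflect SALIENT's actual API or implementation.

```python
import random

def sample_neighbors(adj, batch, fanout, seed=0):
    """For each node in the mini-batch, sample up to `fanout`
    neighbors from its adjacency list. A fixed seed keeps the
    sketch reproducible. Returns {node: [sampled neighbors]}."""
    rng = random.Random(seed)
    sampled = {}
    for node in batch:
        nbrs = adj.get(node, [])
        k = min(fanout, len(nbrs))
        sampled[node] = rng.sample(nbrs, k)
    return sampled

# Toy graph as adjacency lists (5 nodes, undirected edges).
adj = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0, 4], 4: [1, 3]}
batch = [0, 4]  # seed nodes for one mini-batch
hop1 = sample_neighbors(adj, batch, fanout=2)
```

Repeating this step per layer yields the multi-hop subgraph a GNN trains on; the article's point is that this sampling, the slicing of node features, and their movement to the GPU are exactly the stages where time is lost at scale.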
Complete article from MIT News.