Lauren Hinkel | MIT-IBM Watson AI Lab
November 29, 2022
Graphs — potentially extensive webs of nodes connected by edges — can be used to express and interrogate relationships in data, such as social connections, financial transactions, traffic, energy grids, and molecular interactions. As researchers collect more data and build out these graphical pictures, they will need faster and more efficient methods, as well as more computational power, to perform deep learning on them using graph neural networks (GNNs).
Now, a new method called SALIENT (SAmpling, sLIcing, and data movemeNT), developed by researchers at MIT and IBM Research, improves GNN training and inference performance by addressing three key bottlenecks in computation. This dramatically cuts the runtime of GNNs on large datasets — for example, those containing on the order of 100 million nodes and 1 billion edges. The team also found that the technique scales well as computational power grows from one to 16 graphics processing units (GPUs). The work was presented at the Fifth Conference on Machine Learning and Systems.
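The first of those bottlenecks is mini-batch neighborhood sampling: before each training step, the pipeline must gather a small neighborhood subgraph around a batch of seed nodes, and on graphs with hundreds of millions of nodes this step can dominate total runtime. As a rough illustration only — this is a minimal sketch of the general sampling idea, not SALIENT's actual implementation, and the adjacency-list representation is an assumption — such a step might look like:

```python
import random

def sample_neighbors(adj, seed_nodes, fanout, rng=None):
    """Sample up to `fanout` neighbors for each seed node.

    `adj` maps a node id to its neighbor list. This per-batch
    sampling is the kind of work that bottlenecks GNN training
    on large graphs (illustrative sketch, not SALIENT's code).
    """
    rng = rng or random.Random(0)
    sampled = {}
    for node in seed_nodes:
        neighbors = adj.get(node, [])
        if len(neighbors) <= fanout:
            sampled[node] = list(neighbors)
        else:
            sampled[node] = rng.sample(neighbors, fanout)
    return sampled

# Tiny example graph: node 0 links to 1, 2, 3; node 1 links back to 0.
adj = {0: [1, 2, 3], 1: [0]}
batch = sample_neighbors(adj, seed_nodes=[0, 1], fanout=2)
```

Here the sampler caps node 0 at two of its three neighbors while node 1, with only one neighbor, keeps it; in a real pipeline this subgraph would then be sliced into feature tensors and moved to the GPU — the other two stages SALIENT targets.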
Read the complete article at MIT News.