April 23, 2024
Health-monitoring apps can help people manage chronic diseases or stay on track with fitness goals, using nothing more than a smartphone. However, these apps can be slow and energy-inefficient because the large machine-learning models that power them must be shuttled between a smartphone and a central memory server.
Engineers often speed things up using hardware that reduces the need to move so much data back and forth. While these machine-learning accelerators can streamline computation, they are susceptible to attackers who can steal secret information.
To reduce this vulnerability, researchers from MIT and the MIT-IBM Watson AI Lab created a machine-learning accelerator that is resistant to the two most common types of attacks. Their chip can keep a user’s health records, financial information, or other sensitive data private while still enabling huge AI models to run efficiently on devices.
The team developed several optimizations that enable strong security while only slightly slowing the device. Moreover, the added security does not impact the accuracy of computations. This machine-learning accelerator could be particularly beneficial for demanding AI applications like augmented and virtual reality or autonomous driving.
Complete article from MIT News.