Can Alexa keep your secrets? Your smart home personal assistant may be vulnerable to hackers
While Amazon’s Alexa can help you manage your calendar, play your favorite music, and order your takeout, use of the virtual assistant may also place your personal data at risk, according to research by a team of cybersecurity students in the Johns Hopkins University Information Security Institute’s Master of Science in Security Informatics (MSSI) degree program. The group’s project, “Privacy and Security in Skills in Smart Home Personal Assistants,” shed light on such vulnerabilities.
“In this project, we first defined privacy concerns in skills that are most common across different platforms of Smart Home Personal Assistants and conducted experiments regarding each privacy concern. The first concern is the under-controlled third-party server that may bring malicious skills into the market. The second concern is loose market vetting and over-trust in privacy policies,” said Xiangjun Ma, who graduated with his MSSI degree in December 2020.
Skills are voice-driven capabilities in a Smart Home Personal Assistant (SPA)’s application. Users can add skills to a SPA to bring products and services to life; for instance, users can ask their SPA to order an item on Amazon. Users can view available skills and enable or disable them using the smart home assistant’s application. Unfortunately, hackers can simulate a skill and make it appear to have been designed by the application’s programmer, when it actually is malicious software. In that case, the surreptitiously inserted malicious software gains access to the user’s private information, like credit card numbers or home addresses.
The team investigated how Google Home and Amazon Alexa are checking the market’s newly developed skill to protect users against such malicious software. They developed two potentially malicious skills and published them in both the Google Home market and Amazon Alexa store. The skills were successfully published and several potential risks were observed during Google Home market and Amazon Alexa store’s vetting process. The team’s findings also concluded that it is possible that more malicious skills passed the check and leaked into the market.
“We provided possible mitigations to these observations and formed our suggestions to further secure the skills market. The goal of our work is to provide users a more safe and secure environment when using such third-party skills,” said Ma.
To safeguard a SPA, users can check the skills within the settings of the application and report any suspicious activity (i.e. word searches that appear in the cache of the device that was not authorized by the user) to the application’s programmer.
Ma, along with MSSI program students Zichen Wang and Haotian An, presented their research for their Capstone Project. Their faculty mentor for the Capstone Project is Yinzhi Cao, assistant professor in computer science at the Johns Hopkins University.