The revision and postponement of a Microsoft AI tool over privacy concerns illustrates why software developers and users must be wary of experimental features in emerging technologies: a feature that can be exploited effectively becomes a bug, with unintended consequences.
Following widespread criticism from cybersecurity experts, Microsoft in June announced significant changes to its Recall tool, which was to ship enabled as a preview experience on the company’s new Copilot+ laptops that hit the shelves on June 18. The feature, unveiled in May, is designed to take screenshots (aka “snapshots”) of whatever users see and do on their computers and then catalog them. Microsoft said this would give users a “photographic memory” of sorts, helping them find old images, emails, content and other assets across their various applications by entering a prompt and scrolling through a personal timeline.
The problem, according to security researchers and thought leaders, is that Recall effectively operates like a built-in keylogger or spyware tool that could be abused for cybercriminal purposes. Moreover, the tool was originally set to be on by default, requiring users to opt out in order to disable it. This raised concern that many users would unknowingly have the program running in the background, without their explicit consent.
In its original announcement, Microsoft noted that Recall would come with certain privacy safeguards built in; for instance, it said the tool would operate entirely locally. But that wouldn’t stop a malicious actor from exfiltrating a user’s snapshot history after infecting the victim’s machine with infostealer malware or hijacking the computer with remote access software, experts warned, most notably cybersecurity veteran and former Microsoft senior threat intelligence analyst Kevin Beaumont.
Microsoft reacted quickly to the negative feedback, instituting new policies and security mechanisms intended to mitigate some of the identified risk. “Even before making Recall available to customers, we have heard a clear signal that we can make it easier for people to choose to enable Recall on their Copilot+ PC and improve privacy and security safeguards,” the company stated in a June 7 blog post.
As a result of these changes, Microsoft noted that the Recall feature would now arrive off by default on Copilot+ laptops and require users to opt in. Moreover, full use of Recall would require users to enable Windows Hello, the company’s secure passwordless login feature, so they could provide proof of presence. And Microsoft said it would introduce “additional layers of data protection including authentication-driven ‘just in time’ decryption” for snapshots, as well as an “encrypted search index database.”
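To make the “just in time” idea concrete, here is a minimal, purely illustrative sketch of authentication-gated decryption in Python. It is not Recall’s actual implementation: the EncryptedSnapshotStore class and the verify_presence callback are hypothetical stand-ins (the latter for a Windows Hello prompt), and the Fernet cipher from the cryptography package is used only for illustration.

```python
# Conceptual sketch only: NOT Recall's implementation. It illustrates storing
# snapshots encrypted at rest and decrypting them "just in time", only after
# the user proves presence (a stand-in for a Windows Hello prompt).
from cryptography.fernet import Fernet


class EncryptedSnapshotStore:
    def __init__(self) -> None:
        # In a real product the key would live in hardware-backed storage,
        # not process memory; generating it here keeps the sketch self-contained.
        self._fernet = Fernet(Fernet.generate_key())
        self._snapshots: list[bytes] = []  # ciphertext only, never plaintext

    def add_snapshot(self, image_bytes: bytes) -> None:
        # Encrypt immediately so nothing is stored in the clear.
        self._snapshots.append(self._fernet.encrypt(image_bytes))

    def browse(self, verify_presence) -> list[bytes]:
        # "Just in time" decryption: refuse unless the caller proves the user
        # is physically present (e.g., via a biometric or PIN prompt).
        if not verify_presence():
            raise PermissionError("presence not verified; snapshots stay encrypted")
        return [self._fernet.decrypt(token) for token in self._snapshots]
```

In this sketch, the caller supplies whatever presence check the platform provides; if that check fails, the ciphertext is never decrypted.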
But even that ultimately wasn’t enough, as Microsoft on June 13 announced that it would instead preview Recall at a later, unspecified date, and initially only within its Windows Insider Program (an open software testing initiative).
Just last May, Charlie Bell, EVP of Microsoft Security, said in a blog post that the company was “making security our top priority at Microsoft, above all else – over all other features.” That announcement came about one month after the U.S. Department of Homeland Security’s Cyber Safety Review Board released a report examining the summer 2023 Microsoft Exchange Online intrusion and making key security recommendations based on lessons learned.
Tech and software development typically require a delicate balance between innovation and security. Companies want to be at the forefront of the latest tech trends so they can keep up with competitors and entice customers, but they also don’t want to expose users to new vulnerabilities that could ultimately result in data breaches, stolen credentials and violations of privacy regulations.
For this reason, tech developers are encouraged to incorporate security, privacy and regulatory compliance continuously throughout their software development lifecycle. Good AppSec or DevSecOps doesn’t just mean ensuring your product is free of coding vulnerabilities. It also means having protections in place that prevent malicious actors from co-opting your product’s features for nefarious purposes. Because if something can be exploited for illicit financial gain, you better believe that somebody will eventually find a way to do it. And at that point your feature isn’t really a feature anymore – it’s a bug.
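As a rough illustration of that principle, the sketch below shows an off-by-default, explicit-opt-in gate around a data-collecting feature, the posture Recall ultimately adopted. The FeatureGate class and its method names are hypothetical, not any Windows or Microsoft API.

```python
# Illustrative sketch of a secure-by-default feature gate; not a real product API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class FeatureGate:
    name: str
    enabled: bool = False  # secure default: the feature ships switched off

    def opt_in(self, user_confirmed: bool, user_authenticated: bool) -> None:
        # Require both an explicit user decision and a fresh authentication
        # before a privacy-sensitive feature can be turned on.
        if user_confirmed and user_authenticated:
            self.enabled = True

    def run(self, collect: Callable[[], None]) -> None:
        # Never collect anything unless the user has opted in.
        if self.enabled:
            collect()


# Example: the gate stays closed until the user explicitly opts in.
feature = FeatureGate("screen-snapshot-capture")
feature.run(lambda: print("capturing..."))   # does nothing: still disabled
feature.opt_in(user_confirmed=True, user_authenticated=True)
feature.run(lambda: print("capturing..."))   # now runs
```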