Column: Apple, the FBI, and the Internet of Things: Your whole house is open to attack
The unfolding showdown between Apple and the FBI is almost invariably depicted in terms of the security and privacy of your smartphone.
That’s a huge mistake. What really hangs in the balance is the security of every modern device in your house — your refrigerator, thermostat, home alarm system, even your light switches and baby monitors — and the privacy that can be compromised by hacking any of them.
This is the frontier of the so-called Internet of Things: Privacy vulnerabilities have spread from your Internet-connected computers and phones to household devices that can give hackers, whether working for the government or acting illegally, access to a household network.
These insecure devices can result in ‘stepping stones’ into the home for attackers to mount more extensive attacks.
— Sarthak Grover and Roya Ensafi, Princeton University
The FBI’s demand that Apple compromise the security of the iPhone used by one of the San Bernardino attackers could end up making all these devices less secure, when government policy should be aimed at making them all invulnerable.
Security expert Brian Krebs put the risk succinctly in a recent blog post: “Imagine buying an Internet-enabled surveillance camera, network attached storage device, or home automation gizmo, only to find that it secretly and constantly phones home to a vast ... network run by the Chinese manufacturer of the hardware.” Krebs calls this “the nightmare ‘Internet of Things’ scenario. ... The IP cameras that you bought to secure your physical space suddenly turn into a vast cloud network designed to share your pictures and videos far and wide.”
Krebs was referring to a home surveillance camera by the Chinese firm Foscam, which came with the networking capability written in — and hard for anyone but a trained network engineer to disable.
But untold other networked appliances have been found to have security vulnerabilities. Researchers at Princeton recently reported flaws in a large number of household devices. Among them, the Nest digital thermostat was transmitting unencrypted location information about the homes in which it was installed (Nest, which is owned by Alphabet, formerly Google, fixed the vulnerability after it was reported); the Pix-Star web-enabled digital photo frame was transmitting unencrypted traffic to and from the device; and the Sharx home security camera was transmitting unencrypted video outside the home, where it could be intercepted.
Fortune reported last year that a Samsung refrigerator that allowed owners to display their Gmail calendars on a screen in the fridge door could reveal the owners’ Gmail logins to anyone who could gain access to their home Wi-Fi networks. The search engine Shodan has a whole section allowing subscribers to view unsecured webcams; security researcher Dan Tentler told Ars Technica that the feeds include “images of marijuana plantations, back rooms of banks, children, kitchens, living rooms, garages, front gardens, back gardens, ski slopes, swimming pools, colleges and schools, laboratories, and cash register cameras in retail stores.”
These insecurities occur largely because consumer manufacturers focus on the convenience of having a device that can be controlled remotely by its owner from a smartphone or tablet over the Internet. Sarthak Grover and Roya Ensafi, the Princeton researchers, observed that manufacturers often design such devices without any way to close software loopholes: “In some cases, a user may not even be able to log into the device.” That’s worrisome because “these insecure devices can result in ‘stepping stones’ into the home for attackers to mount more extensive attacks.”
The problem is immense; in 2010, Columbia University experts identified more than 500,000 publicly available devices with built-in security flaws — a “conservative” estimate of “the actual population of vulnerable devices in the wild.”
Government regulators could force manufacturers to pay more attention to the security of their networked products, but they act in only a fraction of cases and, for the most part, toothlessly. In 2014, the Federal Trade Commission settled a case with security camera maker TRENDnet, which marketed cameras “for purposes ranging from home security to baby monitoring, and claimed in numerous product descriptions that they were ‘secure.’ ” In fact, the FTC said, the company’s “lax security practices led to the exposure of the private lives of hundreds of consumers on the Internet for public viewing.”
TRENDnet was required to notify consumers of its products’ security flaws and how to fix them, and to stop misrepresenting the devices as secure. It paid no monetary penalty, however. Since then, the FTC has issued policies urging companies “to adopt best practices to address consumer privacy and security issues.”
What does this have to do with the Apple-FBI battle? Potentially a lot. Traffic from networked devices in a home or office isn’t necessarily one way; as some of these examples show, unsecured devices also could be used as ingress points to access users’ email or cloud data accounts.
As these devices become smarter and better-connected, the vulnerabilities multiply. It’s conceivable that not only hackers but also law enforcement authorities will seek to exploit them to circumvent the security protections designed into computing devices, such as those Apple has built into its latest-generation iPhones and operating systems.
That possibility was hinted at by computer scientist Steven M. Bellovin of Columbia and two colleagues in 2014, when they wrote about “lawful hacking” as an alternative to the FBI’s campaign to force device and software makers to build “back doors” into secure data systems that could be opened only by law enforcement agencies armed with court orders.
“Instead of building wiretapping capabilities into communications infrastructure and applications,” they wrote, “government wiretappers can behave like the bad guys. That is, they can exploit the rich supply of security vulnerabilities already existing in virtually every operating system and application to obtain access to communications of the targets of wiretap orders.”
The authors acknowledged that this approach raises “ethical issues”: “Once an exploit for a particular security vulnerability leaves the lab, it may be used for other purposes and cause great damage. Any proposal to use vulnerabilities to enable wiretaps must minimize such risks.”
The authors were writing chiefly about wiretaps, but the idea they outlined could easily be extended to the proliferating networked devices, which are so much more insecure. Therein lies the best possible outcome of Apple’s fight to maintain and enhance the security of its iPhones: If the judges hearing the case lay down clear boundaries for law enforcement’s ability to access personal data by any means, we may all be more secure.