The internet of things. I’m not a fan. I could be, but they are doing it all wrong. In this short intro I’ll explain, by way of a few examples, how the approach of pursuing profit without regard for security has failed, and continues to fail. I will end by suggesting what watershed moments may need to happen before something changes.
There have been a number of well-known adverse side effects of the Internet of Things, and 2016 has probably been the year in which its security ramifications started to reach the awareness of the general public. This year continued a growing trend of DDoS attacks launched from webcams, toasters, thermostats and lightbulbs; Viagra and Cialis spam delivered by refrigerators; webcams that anyone can view; and even vibrators that report their users’ usage back to the manufacturer. Later to the party are devices such as the Amazon Echo. It is designed to be rather useful: you can use it to voice-control home automation and gather information from Amazon in a manner akin to Siri. However, significant privacy concerns have been raised over the fact that it is essentially an always-on microphone in your home, connected to Amazon’s data centres, and that by using it you allow Amazon access to whatever information that microphone can gather.
Classes of devices in the IoT, and the attack vectors they present
I see two main classes of devices on the IoT, and each class carries its own attack vectors and risks. First, there are the novelty IoT devices: the fridge with Twitter, the web-enabled toaster, the lightbulb you can switch on and off from a smartphone app across the internet. These devices I consider to have no real reason for being on the internet at all. The cost/benefit analysis of putting one of these devices on the internet was clearly never done:
- Benefit of internet-enabled lightbulb: I don’t have to get up and press the switch.
- Cost of internet-enabled lightbulb: A warrant for my arrest is issued after my lightbulb takes part in a DDoS of critical national infrastructure.
- Benefit of internet-enabled toaster: I can get an alert on my phone when the toast is ready.
- Cost of internet-enabled toaster: A total stranger can log in to my toaster and set it to full power until it catches fire and kills me.
The second class comprises devices where a reasonable argument for connectivity can be made, but for which there are worrying issues around privacy. Examples are Smart TVs, IP cameras, and utility items like the Amazon Echo. All of these are evolved forms of earlier technologies, using the internet to enhance their functionality. However, the problem inherent in and common to all of them is this: the extra functionality was pressed for hard, because that’s where the business made its money. Security was seen as a cost, and so it was barely given any thought at all. The result was home security cameras that could be accessed by anyone who knew their IP address, Smart TVs open to remote root access, and perhaps worse still, Smart TVs that deliberately spy on you even when they say they won’t.
So maybe they shouldn’t be on the internet, but what’s the worst that could happen?
Voice Activated Technologies
Well, it’s not good. Let us start with the Amazon Echo. It is an internet connected microphone that you have paid Amazon to put in your home. It is programmed to respond only after you mention a key word, but in order to know that you’ve done that, it has to listen to you constantly. It has to be ready at any moment to respond. Amazon have made a few statements and claims about their Echo device:
- They do admit that sounds made immediately before and after the wake-up word are sent to them
- …but they state that the devices use encryption (type and level not made clear), so should be “hackproof”.
- Amazon also states that other sounds picked up are not relevant to the device’s usage, and are thus not transmitted to their servers.
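That first admission is no surprise once you consider how a wake-word device has to work: it must examine every frame of audio just to notice the keyword, keeping a short rolling buffer, and only that buffer plus what follows is ever transmitted. Here is a minimal sketch of the idea in Python, with text tokens standing in for audio frames and an invented wake word (this is an illustration of the general technique, not Amazon’s actual implementation):

```python
from collections import deque

WAKE_WORD = "alexa"      # hypothetical wake word
PRE_ROLL_FRAMES = 3      # frames kept from *before* the wake word

def wake_word_loop(frames):
    """Sketch of a wake-word loop. Every frame is inspected (the mic is
    always 'hot'), but only the small pre-roll buffer plus everything
    after the wake word ends up in `transmitted`."""
    buffer = deque(maxlen=PRE_ROLL_FRAMES)  # rolling pre-roll buffer
    transmitted = []
    listening = False
    for frame in frames:
        if listening:
            transmitted.append(frame)        # post-wake audio is sent
        elif frame == WAKE_WORD:
            transmitted.extend(buffer)       # pre-wake audio goes too
            transmitted.append(frame)
            listening = True
        buffer.append(frame)                 # every frame is heard
    return transmitted

print(wake_word_loop(["chat", "about", "money", "alexa", "what", "time"]))
# → ['chat', 'about', 'money', 'alexa', 'what', 'time']
```

Note that the frames discussing money, which came *before* the wake word, are transmitted along with the request itself, which is precisely what Amazon admits to above.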
If all of these things are true, and the device functions securely and as expected, there shouldn’t be too much to worry about. However, given the track record of IoT security in general, as well as Amazon’s interest in raising revenue, I would not personally feel happy putting one in my home. I should make clear that I don’t mean to single out the Echo or Amazon per se; I’m simply using them as an example of the kind of thing (internet-connected, voice-based tech) that will likely become a growing market and far more prevalent in homes as time goes on. My worry with this sort of device is this: if they are compromised, either by remote code execution or by a malicious actor with physical access (perhaps your cleaner, your babysitter, or the former partner who still has a key because you haven’t changed the locks), they will almost certainly exhibit no behaviour or other external sign that they are now listening all of the time, or that they are sending what they hear to someone else altogether. Just as with my post on Facebook, let’s look at what sort of information can be gleaned from just a microphone:
- Content of conversations is the big one
- What you talk about
- Who you are talking with
- Sensitive information such as that concerning relationships, finances, sexual or other private matters
- Financial information from telephone banking
- Couple this with a call to the target claiming to be from their bank, telling them they must call back and verify their identity before proceeding, using the number on the bank’s website. The Echo will then capture all of their responses to the security questions (though this could be defeated where only particular letters from the memorable information are requested)
- Passwords – spouses tend to trust each other enough to tell each other those
- Numbers and ages of persons in the house, relationships between them
- TV and radio viewing/listening habits
- Pattern of life – when do you get up, how long do you spend in the shower, do you sing in the shower, how long do you take to get dressed, when do you leave for work, when do you get back…
- What your future plans are – based on your requests for info to the Echo
- Locations of sensitive items within the house – e.g. you tell your partner where you are hiding the expensive object before leaving the property for a week on holiday.
One way to think of the ramifications is this: would you change your activities if a blindfolded stranger were allowed to sit in a corner of the room? If the answer is yes, you need to think about the implications of owning an Amazon Echo or similar device. In particular, you may wish to consider the comments of Joel Reidenberg, director of the Center on Law and Information Policy at Fordham Law School in New York City:
“These devices are microphones already installed in people’s homes, transmitting data to third parties, so reasonable privacy doesn’t exist. Under the Fourth Amendment, if you have installed a device that’s listening and is transmitting to a third party, then you’ve waived your privacy rights under the Electronic Communications Privacy Act.”
If there’s no reasonable expectation of privacy, it wouldn’t really be that shocking for Amazon to amend their terms of service (and of course, we’d all click agree without reading them) so that they can start collecting data on your conversations and analysing it into a dataset with commercial value – for example, “How many women under the age of 24 in the SE1 postcode of London have mentioned concern about Brexit?”, to pick just one application. With the Echo tied to your Amazon account, they already have much of the demographic information they need, as well as product purchase history (and thus inferred interests), and now they can enrich that with the subjects of your conversations. That’s when it becomes very valuable. Imagine the recent US election: staff from the DNC or RNC could buy from Amazon summarised data, by county or state, on opinions expressed for and against their respective candidates. With that data they’d be far better able to focus their campaigns on actual key areas, not predicted key areas.
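To make that kind of query concrete, here is a hedged sketch of the aggregation such a dataset would enable. All records and field names are invented for illustration; this is not any real Amazon API or data product:

```python
# Hypothetical enriched profiles: account demographics joined with
# conversation topics overheard by a voice device. Entirely invented.
profiles = [
    {"age": 23, "postcode": "SE1", "topics": {"brexit", "rent"}},
    {"age": 31, "postcode": "SE1", "topics": {"brexit"}},
    {"age": 22, "postcode": "N1",  "topics": {"football"}},
    {"age": 20, "postcode": "SE1", "topics": {"brexit", "music"}},
]

# "How many people under 24 in SE1 have mentioned Brexit?"
matches = [
    p for p in profiles
    if p["age"] < 24 and p["postcode"] == "SE1" and "brexit" in p["topics"]
]
print(len(matches))  # → 2
```

The point is not the trivial filter, but that each extra data source (purchases, demographics, overheard conversation) makes every such query more valuable to a buyer.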
There is another type of internet microphone to worry about: smart children’s toys. Teddy bears that listen to your child and talk to them; they listen to questions and answer them. Unlike the Echo, which probably does have reliable encryption and almost certainly isn’t transmitting irrelevant conversations to Amazon, a teddy bear isn’t much use as a child’s toy if you first have to teach the child to say “OK Teddy Bear” before talking to it. The mic is far more likely to remain ‘hot’ than it is in an Echo. I’m also considerably more worried about the cyber security standards of a (probably Chinese) toy manufacturer than I am about Amazon’s. If it’s a bear that can also use facial recognition to recognise your child, be even more worried.
IP Cameras/Smart TV Camera
I shouldn’t even have to explain this one: Someone can see inside your house over the internet, and it isn’t you. Let’s hope that you don’t end up on an amateur porn website. Let’s hope you don’t have a stalker.
Internet Locks – Yeah just leave your house key on the internet, it’ll be fine
You can now change the locks on your home to software-driven locks which use a combination, or perhaps an NFC-enabled smartphone, to grant access. You can also change the combination over the internet. If this is compromised, you don’t have a lock on your door. The attacker walks in, takes what they want, and resets the door lock on departure, leaving no sign of forced entry. No no no no no.
Why do these hacks keep happening? It’s 2016 for God’s sake!
These things keep happening because, despite more than 20 years of experience with computers, servers and other devices connected to a public, open internet, the industry continues to make the same mistakes in a cyclical manner with every evolution in connected technology. The cycle can be summarised thus:
- Technology advances such that a new genre of consumer product can be released
- Management want to be first to market, getting their product in stores first is paramount
- Properly securing the device increases costs, reduces profits, and might delay release
- Security is de-prioritised; all security does is make it harder to do business
- Product ships with default creds, hard coded creds, no creds required at all, or other gaping security holes
- …But it’s first to market, makes a lot of money, business management hail it as a textbook example of product development
- Bonuses all round, champagne.
- Security flaw detected, massive data breach occurs, or smart product is a bot in a DDoS attack, or streams children being undressed live on the internet (these are all real examples)
- Management lambast security for allowing this to happen
- Lessons learned: nil
- Return to step 1
When will the madness end?!
Realistically, this sort of bad practice will only end if something really terrible happens that forces new minimum security standards on providers of internet-connected equipment. However, the global nature of the internet and the myriad legal frameworks across different countries and jurisdictions make it highly unlikely we’ll see something that stops this. The only way to force change is to make the cost of failure in security outweigh the profits. Companies should know that if they release a product to market that is woefully insecure and instrumental in a breach of cybersecurity, it will cost them significantly, and that, depending on the severity of the matter, senior people within the company will go to jail. Sadly, I think it may take something like the uncovering of a global paedophile ring exploiting a particular brand of child’s toy or babycam, or perhaps several high-profile stalking or murder cases. It shouldn’t have to come to something like that to shake people from their complacency, but as the council of great philosophers known as the Wu-Tang Clan once explained: Cash Rules Everything Around Me.