I have been thinking about the direction Information Security will take over the next five years. The pace of change in the daily lives of human beings is simply astounding, with new technology making its way into our homes and garages with unprecedented speed and ease. So, here are some things to think about when it comes to the next generation of Information Security.
Security of Electronic Personal Assistants (think Alexa, Siri, OK Google, and whatever Samsung is building). As new “skills”/features/capabilities get added to Electronic Assistants (on Amazon there are presently 12,000-plus for Alexa, and growing), the more they will be able to do, and so the potential for an effective hack goes up as well. Put another way: the more aspects of our lives a Digital Assistant can affect, the deeper or more intrusive a hack can be.
Much of the focus around security for Digital Assistants has been in development (secure APIs, signed requests from API to cloud service, and, again on Amazon, IAM Roles determining which resources an API call can access). Although this is good, it leaves out a large piece of the pie, and that is authentication of the speaker: ANYONE can talk to my Alexa, Google Assistant, or Siri and get them to perform tasks. Burger King already exploited this by hijacking Google Assistant with a TV ad.
I am aware that some major banks have been working to gain an edge by building Alexa skills that help sell their services: signing up for new credit card offers, booking a trip on your new bankcard. Electronic Assistants also have access to our calendars; they can call for emergency services, order a Lyft, post to Slack, check your bank balance, turn your lights on and off, activate your Roomba – you get the idea – Electronic Assistants can do things that create a physical action in the real world.
So, really then, all someone would have to do to gain access to every part of your life your Electronic Assistant controls is: one, break into your house, or two, set up their own Electronic Assistant on your account from wherever they are. One method requires bypassing a $40 Kwikset lock; the other requires knowing the username and password of the service hosting your Electronic Assistant.
To solve this, our Electronic Assistants will need to know our voices and take commands only from their authorized owners. Combine this with a second factor: maybe an RFID ring you could wear, or a verbal passphrase that would “open” your Electronic Assistant for five minutes so you can command it. Vendors could also program location awareness into them and allow them to work only from inside your home, cutting off the chance of someone setting one up in another part of the world.
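As a thought experiment, those layered checks could be sketched like this. Everything here is hypothetical: no real assistant exposes an API like this, and the voice IDs, passphrase matching, and location values simply stand in for real speaker recognition and geofencing.

```python
import time

# Hypothetical gate combining the three factors discussed above:
# a recognized voice, a spoken passphrase that "opens" a short
# command window, and a check that the device is in the owner's home.
class AssistantGate:
    def __init__(self, owner_voice_id, passphrase, home_location,
                 window_seconds=300):
        self.owner_voice_id = owner_voice_id
        self.passphrase = passphrase
        self.home_location = home_location
        self.window_seconds = window_seconds
        self.window_opened_at = None

    def open_window(self, voice_id, spoken_phrase, location):
        # Factor 1: speaker recognition; factor 2: verbal passphrase;
        # factor 3: the device must be inside the owner's home.
        if (voice_id == self.owner_voice_id
                and spoken_phrase == self.passphrase
                and location == self.home_location):
            self.window_opened_at = time.time()
            return True
        return False

    def allow_command(self, voice_id):
        # Commands are honored only from the owner's voice, and only
        # while the five-minute window is open.
        if voice_id != self.owner_voice_id:
            return False
        if self.window_opened_at is None:
            return False
        return time.time() - self.window_opened_at < self.window_seconds
```

The point of the sketch is the layering: a stranger shouting commands fails the voice check, and even the owner's voice does nothing until the passphrase has opened the window from inside the home.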
The second area where I see Information Security being needed:
Security of Autonomous Vehicles
There was a revelation lately that you can confuse a self-driving car by placing stickers on a stop sign.
I am certain that example is just one of thousands of methods that could be used in the real world to disrupt the sensors and programming on autonomous cars. My mind can come up with all kinds of other ways: extra yellow stripe paint on the road, high-intensity light (laser pointers) interfering with the cars’ cameras, a mannequin placed in the road to stop an autonomous car indefinitely, conditions of conflict the programmers did not anticipate (red and green lights at the same time), driving with a cardboard silhouette of a person attached to the front of the car. You get the idea. Autonomous vehicles will be easy to disrupt (at least the first few generations, anyway).
Again, the first thing that comes to mind in thinking about solutions is that some kind of two-factor method may prove helpful here: provide a second method of verifying whether a condition is true or false. For the stop sign, in combination with recognizing an octagon shape with letters, perhaps the car could pre-verify the existence of the sign by querying an enhanced map database holding citywide sign locations and meanings. (Yes, I know that can be hacked too, but the difficulty goes up from slapping a sticker on a sign to a full-fledged DB hack.) Add another factor: make the signs, stoplights, and guardrails “intelligent,” so they have their own unique electronic signatures and communicate with the car. Failsafes are a given when conditions are not met. Humans could also be a second factor; when the machine cannot verify whether a condition is true, the car could query you (although that does not instill the kind of faith car companies want people to have in autonomous cars).
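To make the map cross-check concrete, here is a toy sketch. The sign database, coordinates, and distance tolerance are all invented for illustration; a real system would use a signed, authenticated map service rather than a dictionary.

```python
import math

# Hypothetical city sign database: (lat, lon) -> sign type.
SIGN_DB = {
    (47.6205, -122.3493): "STOP",
    (47.6210, -122.3500): "YIELD",
}

def distance_m(a, b):
    # Rough planar distance in meters, good enough for nearby points.
    dlat = (a[0] - b[0]) * 111_000
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def verify_sign(camera_reading, vehicle_pos, radius_m=25):
    """Second factor: trust the camera's classification only if the
    map database lists the same sign type near the vehicle."""
    for sign_pos, sign_type in SIGN_DB.items():
        if (sign_type == camera_reading
                and distance_m(vehicle_pos, sign_pos) <= radius_m):
            return True
    # Camera and map disagree: fail safe, per the failsafes above
    # (e.g., treat it as a stop, or query the human occupant).
    return False
```

A stickered stop sign that the camera misreads as a speed-limit sign now fails the map check, and the disagreement itself becomes the signal to fall back to a failsafe.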
Last, use machine learning as a solution. Most of us drive to the same places every day, and the car could be taught all it needs to know about the route the first few times through. This method, in combination with the others, would make autonomous cars safer for human beings. So… for all you security die-hards out there, YES, all of these solutions can be hacked. The level of complexity rises, however. I am only suggesting methods to make autonomous cars MORE secure and safe, fully recognizing that the only sure way to not have your car hacked is to buy a car from 1983 or before; or, if you are really paranoid, roll 19th-century style with horse and buggy, top hat and monocle.
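The route-learning idea can be sketched in a few lines. The segment IDs and the trip threshold are made up for illustration; a real system would learn far richer features than "segment seen N times."

```python
from collections import Counter

# Toy sketch of "learning the commute": after a few trips, the car
# treats road segments it has seen repeatedly as familiar, and flags
# anything off the learned route for extra verification.
class RouteMemory:
    def __init__(self, min_trips=3):
        self.segment_counts = Counter()
        self.min_trips = min_trips

    def record_trip(self, segments):
        # segments: ordered road-segment IDs for one completed trip.
        # Counting each segment once per trip avoids inflating loops.
        self.segment_counts.update(set(segments))

    def is_familiar(self, segment):
        return self.segment_counts[segment] >= self.min_trips

def needs_verification(memory, planned_segments):
    # Return the planned segments the car has not yet learned,
    # i.e., the ones that should trigger the other factors above.
    return [s for s in planned_segments if not memory.is_familiar(s)]
```

The familiar portions of the daily commute sail through, while an unfamiliar detour is exactly where the car leans harder on the map database, infrastructure signatures, or the human.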
Stay Safe! Stay Secure!