Devices that can tell the difference between you and an imposter just from your behaviour could mean better security in the future
In the last few years, we’ve moved so much of what we do online, or into apps at the very least, that security and encryption have become more important than ever. But having everything accessible from one device also means we need to rethink how we approach security if we want to keep our private stuff private.
There’s a new trend in security you can expect to see a lot of in the future, known as ‘continuous authentication’. Right now, we log onto our phones using our fingerprint or face, and it knows in that moment that it’s really you logging in… but what about an hour later? Is it still you tapping away, looking at your notes, searching your emails, logging into your Amazon account, or has someone else taken your device and started buying stuff on your stored card details?
Continuous authentication is exactly what it sounds like: checking that the person using your laptop or phone is still you any time you try to do anything sensitive, and rejecting them if not.
The iPhone X is the first solid example of continuous authentication on the market. It uses the 3D-scanning Face ID sensor on the front to check the identity of the user before it does things like autofill website passwords, so no one else gets to log into your accounts.
HOW IT MIGHT WORK ON ANDROID
If you’re sceptical about that working well enough, consider an approach Google put forward at one of its I/O developer conferences. Developed by Google’s Advanced Technology and Projects (ATAP) group – run by a former head of DARPA – ‘multi-modal biometrics’ could be the future of continuous authentication on more devices.
Google’s suggestion is that typing patterns could indeed be used to work out whether the person holding the phone is really its owner – but in combination with other factors. You could combine facial recognition from a fairly simple front-facing camera with location data, and even what the microphone is hearing, to make an incredibly accurate system.
If the user looks like you, and sounds like you, and the phone is in your house, AND the typing pattern matches yours, the phone can be highly certain that you’re you.
One of the really clever parts of Google’s system, though, is that it doesn’t have to be a simple ‘yes’ or ‘no’ check. Instead, Google proposed a ‘trust score’ generated by the system, reflecting how sure the phone is about your identity, and apps could be set up to be accessible or not depending on how high that score is.
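To make the idea concrete, here’s a purely illustrative sketch of how a trust score might be calculated. The signal names, weights and per-app thresholds below are invented for this example – Google hasn’t published how its system actually scores anything.

```python
# Illustrative multi-modal trust score. All signals, weights and
# thresholds here are made up for demonstration purposes.

SIGNAL_WEIGHTS = {
    "face_match": 0.35,      # front-camera facial recognition confidence
    "voice_match": 0.25,     # does the microphone audio sound like the owner?
    "typing_pattern": 0.25,  # keystroke timing vs the owner's usual rhythm
    "location": 0.15,        # is the phone somewhere the owner usually is?
}

def trust_score(signals: dict) -> float:
    """Combine per-signal confidences (each 0.0-1.0) into one weighted score."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

# Each app sets its own bar: a game needs little trust, banking needs a lot.
APP_THRESHOLDS = {"game": 0.2, "email": 0.6, "banking": 0.9}

def can_open(app: str, signals: dict) -> bool:
    """An app unlocks only if the current trust score clears its threshold."""
    return trust_score(signals) >= APP_THRESHOLDS[app]

# Everything matches the owner: a high score, so even banking unlocks.
owner = {"face_match": 0.95, "voice_match": 0.9,
         "typing_pattern": 0.9, "location": 1.0}

# Right house, wrong person: face and voice don't match, score collapses.
stranger = {"face_match": 0.1, "voice_match": 0.1,
            "typing_pattern": 0.3, "location": 1.0}
```

With these made-up numbers, the owner scores about 0.93 and can open everything, while the stranger scores about 0.29 – enough for the game, but email and banking stay locked. That’s the appeal of a graded score over a binary check: one weak signal degrades access gracefully instead of locking you out entirely.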
Gathering and analysing all this information is another job for artificial intelligence and machine learning, but with more and more phones shipping with chips dedicated to these kinds of neural-network tasks (including the iPhone X, Huawei Mate 10 Pro, Moto X4 and likely the Samsung Galaxy S9), modern devices are quite literally built to handle them.
True, there’s a certain existential dread that comes from your phone casting constant judgement over whether you seem enough like yourself to access your private information, but in a world where your phone can tell someone effectively everything about your past and foreseeable future, it’s better to have the AI on your side.