Posted by Obliq Blog

Devices that can tell the difference between you and an imposter just from your behaviour could mean better security in the future

In the last few years, we’ve moved so much of what we do online, or into apps at the very least, that security and encryption have become more important than ever. But having everything accessible from one device also means we need to rethink some things about security if we want to keep our private stuff private.

There’s a new trend in security you can expect to see a lot of in the future, known as ‘continuous authentication’. Right now, we log onto our phones using our fingerprint or face, and it knows in that moment that it’s really you logging in… but what about an hour later? Is it still you tapping away, looking at your notes, searching your emails, logging into your Amazon account, or has someone else taken your device and started buying stuff on your stored card details?

Continuous authentication is exactly what it sounds like: checking that the person using your laptop or phone is still you whenever they try to do anything sensitive, and rejecting them if not.

Ideally, it does this seamlessly in the background, so it doesn’t interfere with your use of the device, because most people just turn off security the second it gets in the way.
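The pattern described above can be sketched in a few lines: instead of trusting a single login, the device re-checks identity every time a sensitive action is attempted. This is a minimal illustration, not any vendor’s implementation; the confidence function is a stand-in for a real biometric check.

```python
# Minimal sketch of the continuous-authentication pattern: re-check
# identity whenever a sensitive action is attempted, silently in the
# background, rather than only once at login. The verifier here is a
# stand-in for a real biometric signal (face, typing rhythm, etc.).
import functools

def current_user_confidence():
    """Stand-in for a background biometric check.
    Returns how confident the device is that the owner is present."""
    return 0.9  # pretend the signals still look like the owner

def requires_owner(min_confidence):
    """Gate a sensitive action on a fresh identity check."""
    def decorator(action):
        @functools.wraps(action)
        def wrapper(*args, **kwargs):
            if current_user_confidence() < min_confidence:
                raise PermissionError("re-authentication required")
            return action(*args, **kwargs)
        return wrapper
    return decorator

@requires_owner(min_confidence=0.8)
def autofill_password(site):
    return f"password for {site}"

print(autofill_password("example.com"))  # allowed: confidence 0.9 >= 0.8
```

The key design point is that the check happens at the moment of the sensitive action, not at unlock time, so a stolen-but-unlocked device still protects its secrets.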
Of course, the question is how best to pull this off. Biometrics are one option, of which face-scanning is the least invasive, especially for devices such as laptops and phones, which tend to sit in predictable places in front of your face when in use.


This is exactly the case for the iPhone X, which is the first solid example of continuous authentication on the market. It uses the 3D-scanning Face ID sensor on the front to check the identity of the user before it does things like autofill website passwords, so no one else gets to log into your accounts.

However, making this work requires a facial recognition system that’s extremely hard to fool, and while many future phones will have systems like Apple’s, that doesn’t help anyone with current phones, tablets or laptops. But there are other ways.
For a start, we all use our devices slightly differently, in ways that are meaningless to humans, but that computers can discern through the power of machine learning. 
There’s already software available from companies such as TypingDNA or BehavioSec that can learn your specific typing style and cadence, and can then tell when someone other than you is using your keyboard. In the case of TypingDNA, the software sits passively on your computer, recording the timing patterns of presses on 44 common keys.
The patterns are hugely complex and distinct, even from the same person typing the same thing, but with AI and machine learning, the system gains enough information to say whether a pattern fits you, or seems to come from someone else. TypingDNA says that its newest biometrics can tell if you’re really you just from the way you type things as short as login information.
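The core idea behind this kind of keystroke analysis can be sketched simply: measure the time between consecutive key presses, build an average profile for the owner, and compare new samples against it. The feature choice, threshold and data below are invented for illustration; real products like TypingDNA use far richer features and trained models, not a single distance check.

```python
# Illustrative sketch of keystroke-dynamics matching -- not TypingDNA's
# or BehavioSec's actual algorithm. Real systems use many more features
# and a trained classifier instead of one distance threshold.
from statistics import mean

def digraph_timings(key_events):
    """Turn (key, press_time) events into average per-key-pair gaps."""
    timings = {}
    for (k1, t1), (k2, t2) in zip(key_events, key_events[1:]):
        timings.setdefault((k1, k2), []).append(t2 - t1)
    return {pair: mean(ts) for pair, ts in timings.items()}

def match_score(profile, sample):
    """Average absolute gap difference on key pairs both typists produced.
    Lower means the sample looks more like the enrolled profile."""
    shared = profile.keys() & sample.keys()
    if not shared:
        return float("inf")
    return mean(abs(profile[p] - sample[p]) for p in shared)

# Enrolment: the owner typing "the" twice (times in seconds).
enrolled = digraph_timings([("t", 0.00), ("h", 0.12), ("e", 0.22),
                            ("t", 1.00), ("h", 1.11), ("e", 1.21)])
# A new sample with a very similar rhythm...
genuine = digraph_timings([("t", 5.00), ("h", 5.13), ("e", 5.23)])
# ...and one that is much slower and more uneven.
imposter = digraph_timings([("t", 9.00), ("h", 9.45), ("e", 9.90)])

THRESHOLD = 0.05  # illustrative; real systems tune this per user
print(match_score(enrolled, genuine) < THRESHOLD)    # accept
print(match_score(enrolled, imposter) < THRESHOLD)   # reject
```

Even this toy version shows why short inputs can work: a login field still yields a handful of key-pair timings, which is enough signal for a well-trained model to score.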


If you’re skeptical about that working well enough, consider an approach put forward by Google at its I/O developer conference. Developed by Google’s Advanced Technology and Projects (ATAP) group – run at the time by a former head of DARPA – ‘multi-modal biometrics’ could be the future of continuous authentication on more devices.

Google’s suggestion is that you could indeed use typing patterns to work out whether the person typing is really you, but in combination with other factors. You could combine facial recognition from a fairly simple front-facing camera with location data, and even what the microphone is hearing, to make an incredibly accurate system.

If the user looks like you, and sounds like you, and the phone is in your house, AND the typing pattern matches yours, the phone can be highly certain that you’re you.

One of the really clever parts of Google’s system, though, is that it doesn’t have to be a simple ‘yes’ or ‘no’ system. Instead, Google proposed a ‘trust score’ generated by the system, depending on how sure the phone is about your identity, and apps could be set up to be accessible or not depending on how high that trust score is.

If one or two of these factors don’t seem to match, the phone could stop the user opening a banking app, but might still let them open something relatively harmless like a music app, meaning that false negatives wouldn’t be as annoying.
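The trust-score idea boils down to combining several weak signals into one number and letting each app demand the level it needs. Here is a minimal sketch of that logic; the signal names, weights and thresholds are invented for illustration and are not Google’s actual design.

```python
# Illustrative sketch of a multi-modal trust score with per-app
# thresholds. Signal names, weights and thresholds are invented,
# not Google's actual design.

WEIGHTS = {"face": 0.35, "voice": 0.25, "location": 0.15, "typing": 0.25}

# Minimum trust each app demands: sensitive apps need a higher score.
APP_THRESHOLDS = {"banking": 0.85, "email": 0.60, "music": 0.30}

def trust_score(signals):
    """Weighted sum of per-signal confidences, each in [0, 1]."""
    return sum(WEIGHTS[name] * conf for name, conf in signals.items())

def can_open(app, signals):
    return trust_score(signals) >= APP_THRESHOLDS[app]

# Face and typing match well, but the phone is somewhere unfamiliar
# and the microphone can't confirm the voice.
signals = {"face": 0.9, "voice": 0.4, "location": 0.2, "typing": 0.95}

print(round(trust_score(signals), 4))  # about 0.68
print(can_open("banking", signals))    # blocked: not sure enough
print(can_open("music", signals))      # allowed: harmless enough
```

This is exactly the graded behaviour described above: a partial mismatch degrades access to sensitive apps without locking the user out of everything.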

Getting all this information together and analysed is another job for artificial intelligence and machine learning, but with more and more phones shipping with chips dedicated to this kind of neural-network task (including the iPhone X, Huawei Mate 10 Pro, Moto X4 and likely the Samsung Galaxy S9), modern devices are built to handle it.

True, there’s a certain existential dread that comes from your phone casting constant judgement on whether you seem enough like yourself to access your private information, but in a world where your phone can tell someone effectively everything about your past and foreseeable future, it’s better to have the AI on your side.


