Apple recently published a security white paper describing in some detail its advancements in the use of face recognition - described as having an error rate of 1 in a million and using highly advanced 3D projection technology to thwart would-be data thieves.
What the technology can do is extremely clever, but for all the facts and figures, perhaps what's most interesting is what cannot be fully described or articulated: Apple is using two neural networks to cross-check the face recognition process and each other, to ensure that the system is not only secure but remarkably accurate. But why is it so accurate, and can the system be hacked or abused?
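Apple has not published how its cross-checking works, but the intuition behind requiring two independent checks is easy to sketch. If each network independently accepts an imposter with some small probability, then requiring both to agree multiplies those probabilities together. The figures below (a per-network false-accept rate of 1 in 1,000) are purely illustrative assumptions, not Apple's numbers, and real networks trained on the same faces would not be fully independent:

```python
import random

random.seed(0)

P_FALSE = 1e-3      # assumed per-network false-accept probability (illustrative)
TRIALS = 1_000_000  # simulated imposter attempts

# An imposter must fool BOTH independent checks to be accepted.
accepted = sum(
    1 for _ in range(TRIALS)
    if random.random() < P_FALSE and random.random() < P_FALSE
)

print(f"analytical combined rate: {P_FALSE ** 2:.1e}")
print(f"simulated combined rate:  {accepted / TRIALS:.1e}")
```

Under the independence assumption, two checks at 1-in-1,000 each combine to roughly 1-in-a-million - the same order of magnitude as Apple's headline figure, though that coincidence is illustrative, not an explanation of Face ID's actual design.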
Surprisingly, exactly why the system is so accurate may not be entirely quantifiable - increasingly, the science of neural networks is moving beyond their creators' ability to understand them.
Arthur C. Clarke wrote that "Any sufficiently advanced technology is indistinguishable from magic."
Neural networks and deep learning systems like IBM's Watson and Google's DeepMind are, in some use cases, achieving "magical" results that surpass those of skilled humans - but we are not always sure why!
DeepMind is exceeding the accuracy of seasoned experts when diagnosing certain conditions, but precisely why it is better may be too difficult for us to understand:
“...the more powerful the deep-learning system becomes, the more opaque it can become. As more features are extracted, the diagnosis becomes increasingly accurate. Why these features were extracted out of millions of other features, however, remains an unanswerable question.”
Systems like Face ID and DeepMind are effectively "black box" systems - systems that generate the correct results, but for reasons that are increasingly difficult, or impossible, for us to quantify in human terms.
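The "black box" effect can be seen even in a toy example. The sketch below (plain NumPy, nothing to do with Apple's actual models) trains a tiny network on the XOR function. The network ends up answering correctly, yet nothing in its learned weight matrix offers a human-readable explanation of *how*:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a single linear rule cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny 2 -> 8 -> 1 network with random initial weights.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.2
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # output probability
    grad = p - y                              # cross-entropy gradient
    dh = (grad @ W2.T) * (1 - h ** 2)         # backpropagate to hidden layer
    W2 -= lr * h.T @ grad; b2 -= lr * grad.sum(0)
    W1 -= lr * X.T @ dh;   b1 -= lr * dh.sum(0)

print(np.round(p).ravel())  # the network's answers to the four XOR cases
print(W1)                   # the learned weights: correct, but inscrutable
```

The eight hidden units each encode some fragment of the decision, but asking *which* fragment means *what* has no clean answer - and that is with sixteen weights, not the millions in a production face recognition model.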
What does this mean when using technology for evidential purposes? If an iPhone with Face ID is adduced as evidence against a suspect in a murder charge, are we, as computer forensics experts, comfortable with the explanation that "it just works, but we're not sure why"?
Will hackers be able to find or induce a flaw in the neural network that could be used to create unexpected results, race conditions, hacks and Easter eggs?
A brave new world of neural forensics awaits!
Image from Futurama, 20th Century Fox Television.
The secure neural networks were trained specifically for Face ID resolution using over a billion images, including infrared images and depth maps, that Apple collected during informed studies conducted around the world, with representative groups of people from a wide spectrum of origins and backgrounds.