Every year for the past 10 years, the media, consulting agencies in our field, and even our clients have predicted our demise – that is, they predict that we and companies like ours will soon be bankrupt because a 3D scanner will be available on every smartphone. So why hasn’t it happened yet? Are we that much closer to our downfall now that the iPhone X has a built-in 3D sensor? And why is our industry only growing?
Last year, I actually wrote a blog article entitled “Why Isn’t a 3D Scanner in Your iPhone Yet?”. By the time it was scheduled to be published on our website, Apple had announced the iPhone X. I couldn’t publish it. I had to wait and see: maybe Apple had finally done it, and we were about to go bust. Well, according to reviews and my own anecdotal experience, the 3D face recognition in that phone does work relatively well, and kudos to them for that. It’s hard to get right, because the feature has to work in all sorts of lighting conditions and climates (I’m thinking of a bright sunny winter day in Siberia or a damp, rainy day on the island of Bali with temperatures reaching 40°C/104°F).
That being said, getting face recognition to work on a smartphone is one challenge; getting professional results akin to what you expect from today’s professional 3D scanners presents a long list of additional challenges. Why?
- Battery. To recognize a face, you only need one frame of 3D data. Granted, if the system doesn’t recognize you right away, it will keep taking 3D pictures of you to match against what is in the database. But one good snapshot (frame) of your face is all you really need. To get a good 3D scan of an object, you generally need thousands of frames, because you have to go all the way around the object, scanning it from underneath and from above. That means the camera keeps working continuously (our scanners, and others on the market, capture anywhere from 10 to 40 frames per second). This drains the battery very quickly. What is the use of a 3D scanner in a phone that drains the entire battery in 5 minutes?
- Low resolution. There is often a trade-off between “high resolution” and “ease of use”. If you want to capture anything in 3D without fear of losing tracking during scanning, you need a large field of view (i.e. the more your camera sees, the easier it is for the software to stitch all the results together in real time). But the larger your field of view, the lower the resolution. Lower resolution in an image means lower quality of data (think of a low-res photo).
Judging by the few apps released for the iPhone X that use its sensor as a 3D scanner, the camera’s resolution capabilities are not great. If I had to guess, it is somewhere around 5 mm, whereas most professional scanners measure their resolution not in millimeters but in tens or hundreds of microns (0.05–0.5 mm, i.e. 10 to 100 times better).
- Not enough processing power. Although there are algorithms that can capture and post-process data in real time (i.e. right on your device), the results leave much to be desired. Current mobile CPUs are simply not powerful enough to process a continuous stream of 3D data in real time.
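To put rough numbers on the battery and processing problem, here is a back-of-envelope sketch. All the figures in it (frame rate, scan time, frame size) are illustrative assumptions I picked for this post, not measurements of any particular device:

```python
# Back-of-envelope estimate of why continuous 3D scanning is hard on a phone.
# All figures are illustrative assumptions, not measurements of any device.

FPS = 30                 # frames per second while scanning (mid-range of 10-40)
SCAN_MINUTES = 5         # time to walk around an object, scan top and bottom
FRAME_MEGAPIXELS = 0.3   # assumed size of one depth frame, in megapixels
BYTES_PER_PIXEL = 2      # e.g. 16-bit depth values

frames = FPS * SCAN_MINUTES * 60
raw_megabytes = frames * FRAME_MEGAPIXELS * 1e6 * BYTES_PER_PIXEL / 1e6

print(f"{frames} frames captured")                      # 9000 frames
print(f"~{raw_megabytes:.0f} MB of raw depth data")     # ~5400 MB
```

Compare that with face recognition, which needs roughly one good frame: a full scan asks the camera, CPU and battery to handle thousands of times more data, continuously.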
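The field-of-view trade-off can also be sketched in a couple of lines. A depth sensor has a fixed pixel grid, so the wider the area it covers, the coarser the spacing between measured 3D points. The sensor width and field-of-view numbers below are hypothetical, chosen only to illustrate the effect:

```python
# Why a larger field of view costs resolution: with a fixed pixel grid,
# covering a wider area means coarser sampling of the surface.
# The numbers are hypothetical, chosen only to illustrate the trade-off.

def point_spacing_mm(fov_width_mm: float, pixels_across: int) -> float:
    """Approximate distance between neighboring 3D points on the surface."""
    return fov_width_mm / pixels_across

SENSOR_PIXELS = 640  # hypothetical depth-sensor width in pixels

# Narrow field of view, like a professional close-range scanner:
print(point_spacing_mm(200, SENSOR_PIXELS))   # 0.3125 mm per point
# Wide field of view: easier tracking, but much coarser data:
print(point_spacing_mm(3000, SENSOR_PIXELS))  # 4.6875 mm per point
```

With the same sensor, widening the view from 20 cm to 3 m moves you from sub-millimeter spacing into the multi-millimeter range, which is why “easy to track” and “high resolution” pull in opposite directions.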
So Apple seems to have solved the question of lighting and temperature. That’s very impressive in itself. But the next set of challenges is even greater: a significantly more powerful battery and CPU. That, too, is doable in time. The biggest question mark for me, though, is how to avoid trading resolution for ease of use. This can be achieved by throwing mathematicians at the problem; I am sure there are a few in the world who would love to have a crack at this puzzle, and if Apple really wanted to solve it, I am sure they would be able to. However, all this incredible progress will take time. For now, I am happy to say, our R&D department is busy with our own ideas on how to bring to market a more powerful, cost-effective and easy-to-use device. Before the winter is out, you’ll receive some good news on that front. So, stay tuned!
CEO & Co-Founder