“What we’re really doing is building an augmented version of humanity.” Eric Schmidt, Google CEO 2001 – 2011
Your Mother’s Face
When you’re born, you cannot see your mother’s face. Two months: that’s how long you spend learning to see. Though you are born with all the components needed for vision – your eyes and your brain – you do not see clearly. Arguably, you also don’t know what you’re ‘seeing’.
“…[T]hough the optics of the eye are mature, infants still can’t see as well as adults because brain areas responsible for vision are still immature. To use the camera analogy, the reason that infants’ vision is blurry is because of the “film”, not the lens. The retina (the film of the eye), in addition to other visual parts of the brain, is incompletely developed in infants.” The Smith-Kettlewell Eye Research Institute, What Can My Baby See
So, you have the necessary hardware, but your software, your database, your processing power, and your neural networks aren’t yet up to scratch. They can’t derive meaning from what is in the line of sight. Nor can the brain cook up a response to these stimuli.
But within a mere two months, you begin to respond to complex signals and develop the neural processing skills required to truly see. Biological vision, in its purest form. Visual acuity. Vision no man or machine can emulate.
Your vision at birth is an analogy for the current state of visual search. So let’s start at the beginning.
The Infant State of Augmented Reality and Visual Search
The gap between the physical and digital worlds is immense. Despite significant pedigree and influence, the Church of Holy Technology (aka the Singularity) and we regular mortals still struggle to make meaningful connections between real-world experiences and the rich digital world we’re evolving into. (Heads up: if you work in tech or search and don’t know about Google, NASA, and the Federal Government’s ‘link’ to the Singularity movement, you should.)
Augmented reality, computer vision, mobile hardware, and wearables are the next waves driving techno-evolution. If we’re paying attention, we can also see that they hold the potential to change the very way human beings experience the world, for better or for worse.
Have I gone mental? A bit too much lead in the pipes here in London? Nope. When Google alludes to ‘augmenting humanity’ in such a sexy, mysterious way (not to mention establishing a university dedicated to it), you’d better sit up and listen. In fact, in a recent week-long augmented reality series, many of the world’s top augmented reality experts spilled these juicy nuggets about future bio-tech and AR chimeras.
“The day I have a pair of funky sunglasses and walk around with non-obnoxious advertising, news, social networking, totally immersed in the world, then AR will really have arrived.” Prof. Blair MacIntyre, director of Augmented Environments Lab, Georgia Institute of Technology (see this visualised in the 2nd video below)
“And AR contact lenses, yes, they’ll happen but my question to you is this – why have it washing around on the surface of your eye when you can have it implanted inside your head? Sure there are social and ethical issues but these things will change with each generation as it becomes more acceptable.” Prof. Steve Feiner, Head of Computer Science, Columbia University
“The horror! The horror!” I hear you shriek. “Human beings would never allow this to happen!” or “Only with medical need!” But please. Think about it. Why wouldn’t you? That phone that you carry in your bag or your pocket, that you see first thing morning and night, that sleeps next to your bed… it contains the same chip. So do some of the American soldiers in the Middle East. Mobiles and wearables are merely the transition devices. You are the next carrier. Let’s keep it real here: people exchange freedom for stability and convenience every day (from Facebook privacy/image recognition to our net neutrality – severely threatened by the Patriot Act).
Enough Of The Highbrow Stuff: Show Me the Money
What the hell am I talking about here? See for yourself. Here are a few more clichés for good measure: seeing is believing; a picture paints a thousand words. Clichés, yes, but this is exactly how I felt when I first saw augmented reality in practice at the world’s leading search and social conference, Search Engine Strategies (SES) in New York at the beginning of last year.
Pretty cool, huh? With Google going quiet on augmented reality for way too long to be comfortable (update – GOOGLE GLASSES! OH YEAH!), followed by their acquisition of facial recognition startup PittPatt and the launch of Google+, we’re not far off this very cool, ‘opt-in’ iteration of augmented reality.
How Does It Work? Augmented Reality’s Current Guise
You have devices which see (mobiles, web cams, wearables, and motion-sensor devices like Kinect). Then there’s the retina, which processes (computer vision technologies, GPS hardware, the sensors), and the brain (augmented reality and visual search). However, we’re still missing the machine equivalent of biological vision: the physics and machine vision required to help our ‘database’ of experiences evolve. We don’t even have that database, or the neural networks required to make the invisible world around us a universal, homogeneous experience, nor a context for how to respond (that is, how to serve the correct information back to the user). In short, no true visual search.
Augmented reality can be experienced with the devices below to launch a range of digital content experiences (sound, pictures, web pages, video, 2D or 3D animations, and haptic response).
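To make the see–process–respond chain above concrete, here is a minimal, hedged Python sketch of that pipeline. Everything in it is hypothetical: the `Frame` stand-in, the toy `CONTENT_DB` lookup, and the function names are illustrative only. A real system would ingest camera pixels and run actual computer-vision recognition; here recognition is mocked so the flow from “device that sees” to “digital content experience” is visible.

```python
# Hypothetical sketch of the sense -> process -> respond pipeline.
from dataclasses import dataclass


@dataclass
class Frame:
    """Stand-in for one camera frame: just a label of what is in view."""
    scene: str


# The 'database' of experiences the article says we still lack, mocked as
# a simple lookup from recognised objects to digital content experiences.
CONTENT_DB = {
    "movie_poster": {"type": "video", "payload": "trailer.mp4"},
    "restaurant_sign": {"type": "web_page", "payload": "menu.html"},
}


def extract_features(frame: Frame) -> str:
    """The 'retina': real computer vision would turn pixels into a
    recognised object; here we simply pass the label through."""
    return frame.scene


def respond(recognised: str) -> dict:
    """The 'brain': map what was seen to a content experience, falling
    back to nothing when there is no context for a response."""
    return CONTENT_DB.get(recognised, {"type": "none", "payload": None})


def visual_search(frame: Frame) -> dict:
    """Run the full pipeline: see, process, respond."""
    return respond(extract_features(frame))
```

Pointing this toy pipeline at `Frame("movie_poster")` returns the video experience, while an unrecognised scene falls through to the empty response – exactly the “no context for how to respond” gap described above.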
Augmented Reality: The Future of Visual Search
If computer vision and hardware innovations are the what in this equation, augmented reality is the how. What we need next is the why. Have a look at the current state of the augmented reality industry for yourself.
Hampered by awkward and insufficient technology, bandwidth, data charges, and most importantly – cohesive content, true visual search is yet to be achieved. We need meaning for these experiences. We need them to be organised in a way that is both beautiful and useful. What does this mean for search?
“The dominance of desktop systems in the search arena is rapidly drawing to a close. Their lack of portability, of context-awareness, reducing them to “dumb” WWW terminals useful only in office environments.
We’re on the cusp of a new world order where handheld devices and wearables will become the primary discovery and information retrieval mechanism. By 2020 between 50 and 500 billion objects will be networked. Current search taxonomy will fail and, arguably, is already.
A sleeker, smarter, more natural search interface will be required – one that enables you to touch, swipe, drag, gesticulate, talk and interact with the data in the world around you. Augmented Reality will provide that interface.” Howard Ogden, Founder: AugmentReality and Mobilistar
So what we need now is an information retrieval system that can help us contextualise, give meaning to, and respond to the Internet of Things. What will provide this lens on the brave new world?
- Social interaction in the digital realm, primarily through social media and the ‘folksonomies’ of information we provide
- Coupled with behavioural data our mobile devices broadcast (on a second-by-second basis - what you do, where you are, what you buy are all tracked)
- Advances in computer vision
Or, as Mike Grehan, one of the luminaries of search, said recently of information retrieval and Google:
“You are the black box.”