It was interesting to read about the new eight-strong Artificial Intelligence team at Facebook applying so-called 'deep learning' techniques, which apparently use simulated networks of brain cells ('neural nets') to process data. Techniques like these can take machine learning beyond simpler, explicit forms of data (like metadata or keywords) towards a deeper understanding of the connections and implicit meaning in data, which can then support a better comprehension of things like context and emotion when analysing content such as the newsfeed.
There are some interesting people in the Facebook AI group, including the founder of facial recognition startup Face.com, which Facebook acquired last year, and some intriguing potential applications of this kind of work. Facebook has already invested quite a lot in facial recognition through its 'tag-suggest' feature, which identifies faces in newly uploaded photos by comparing them with pictures in which users have previously been tagged. It was also notable that Facebook recently updated its data use policy to enable it to use profile pictures to suggest tags. The company also recently acquired Mobile Technologies, which has developed speech recognition and machine translation technology.
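Facebook hasn't published the internals of tag-suggest, but the general pattern of matching a newly detected face against faces from previously tagged photos can be sketched as comparing numeric 'embeddings' of each face. A minimal illustration in Python, where the embedding vectors are invented for the example and would in practice come from a face recognition model:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_tag(new_face, tagged_faces, threshold=0.8):
    """Return the best-matching known user for a new face embedding,
    or None if no stored embedding is similar enough."""
    best_user, best_score = None, threshold
    for user, embedding in tagged_faces.items():
        score = cosine_similarity(new_face, embedding)
        if score > best_score:
            best_user, best_score = user, score
    return best_user

# Illustrative vectors only -- in practice these would be produced by a
# face-embedding model applied to previously tagged photos.
tagged_faces = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}
new_face = np.array([0.88, 0.15, 0.28])
print(suggest_tag(new_face, tagged_faces))  # -> "alice"
```

The point of the sketch is simply that once faces are reduced to data, suggesting a tag becomes a similarity comparison against faces the system has already seen.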
Similarly, Google has patented new facial recognition technology that could be used to unlock mobile phones, and it has of course done a lot of work in speech and visual recognition to enhance search. Its rapidly improving capability to interpret implicit meaning in data is starting to become really interesting. Take a look at this search result:

There is no mention of the word 'Gigli' in that search, but Google is able to infer that a query combining 'Ben Affleck', 'film' and 'terrible' should lead to that result, indicating the shift away from a reliance on keywords and towards results powered by more contextual forms of data (there is considerable debate about what this means for SEO practice). The development of the Knowledge Graph enables Google to use semantic information gathered from a wide variety of sources to enhance results like this.
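Google hasn't said exactly how the Knowledge Graph resolves a query like this, but the shift from keyword matching to entity matching can be sketched as scoring candidate entities by how well their known attributes cover the query terms, rather than requiring the query words to appear in the result itself. A toy illustration (the entity data and scoring are entirely made up for the example):

```python
# Toy "knowledge graph": each entity carries attributes rather than just a
# title, so a query can surface something it never mentions by name.
ENTITIES = {
    "Gigli": {"actor": "ben affleck", "type": "film", "reception": "terrible"},
    "Argo": {"actor": "ben affleck", "type": "film", "reception": "acclaimed"},
    "Good Will Hunting": {"actor": "ben affleck", "type": "film", "reception": "acclaimed"},
}

def semantic_search(query, entities):
    """Rank entities by how many query words appear among their
    attribute values -- matching on facts about the entity, not its name."""
    words = set(query.lower().split())
    scores = {}
    for name, attrs in entities.items():
        attr_words = set(" ".join(attrs.values()).split())
        scores[name] = len(words & attr_words)
    return sorted(scores, key=scores.get, reverse=True)

print(semantic_search("ben affleck terrible film", ENTITIES))
# -> ['Gigli', 'Argo', 'Good Will Hunting'] (Gigli matches the most facts)
```

'Gigli' comes out on top even though the query never names it, which is the essence of the keyword-to-context shift described above.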
As the richness of data accessible to companies like Facebook and Google increases exponentially, so does their potential capability to deliver enhanced services that capitalise on these more contextual connections between data. Including, of course, advertising.
It makes me wonder whether this kind of work will eventually answer the great unanswered question around search: how to make in-video content searchable. If video content could be turned into data, and the implicit meanings and connections in that data could then be interpreted, the possibilities would be amazing.
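As a very rough sketch of what that might look like, assuming speech recognition had already reduced a video's audio to a time-stamped transcript (the transcript below is invented for illustration), searching the video becomes searching that data:

```python
# Hypothetical time-stamped transcript produced by speech recognition;
# once video is reduced to data like this, it becomes searchable.
transcript = [
    (0, "welcome to the product launch"),
    (42, "our new camera ships next month"),
    (95, "pricing starts at two hundred dollars"),
]

def search_video(query, transcript):
    """Return the timestamps (in seconds) of transcript segments
    containing every word of the query."""
    words = set(query.lower().split())
    return [t for t, text in transcript if words <= set(text.split())]

print(search_video("new camera", transcript))  # -> [42]
```

Interpreting the implicit meaning in that data, rather than just matching words, would be the harder and far more interesting step.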
HT Ben Davis for the Ben Affleck visual