The artificial neural networks developed in the Embed the World project can link two photos taken in the same location based on visual similarities between them, and can also determine whether a piece of text describes the scene.
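Facebook hasn't published implementation details for Embed the World, but systems like this typically map images and text into a shared vector space and compare them by similarity. The sketch below illustrates the general idea with cosine similarity over toy vectors; the embedding values and names (`photo_a`, `photo_b`, `caption`) are invented for illustration, not taken from the project.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for learned embeddings (illustrative values only).
photo_a = [0.9, 0.1, 0.3]    # a photo of some location
photo_b = [0.85, 0.15, 0.35] # another photo of the same location
caption = [0.8, 0.2, 0.4]    # text describing that scene

# Nearby embeddings suggest the same place or a matching description.
print(cosine_similarity(photo_a, photo_b))
print(cosine_similarity(photo_a, caption))
```

In a real multimodal system, the vectors would come from separate image and text encoders trained so that matching pairs land close together in the shared space.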
There’s a lot of talk about long-term goals, but small victories along the way have made Facebook incrementally smarter. Now, DeepFace is the driving force behind Facebook’s automatic photo tagging. Rob Fergus, a veteran of NYU and MIT’s Computer Science and Artificial Intelligence Lab, leads the AI research team concerned with vision. His team’s work can already be seen in the automatic tagging of photos, but Fergus says the next step is video. The AI would “watch” a video and be able to classify its content.
Facebook has traditionally farmed such classification tasks out to contracted companies, so this technology could help reduce those costs.