Google’s London-based AI subsidiary, DeepMind, has developed an algorithm that can render full 3D models of objects and scenes from regular 2D images. Called the Generative Query Network (GQN), the new algorithm can be used for a wide range of applications, including robotic vision, VR simulation, and more. All of this was revealed yesterday when details of DeepMind’s research were published in the journal Science.
Above: DeepMind’s GQN imagined this maze from static images.
According to the report, the GQN can compose and render an object or a scene from any angle, even when it is fed only a handful of 2D images. That’s quite a change from the way most AI vision systems work, where the system has to be fed millions of images painstakingly labeled by humans. In fact, the new algorithm can apparently render the unseen sides of objects and generate a 3D view from multiple angles without human supervision or labeled training data, because it has the power to ‘imagine’ what the scene might look like from the other side.
“The GQN first uses images taken from different viewpoints and creates an abstract description of the scene, learning its essentials. Next, on the basis of this representation, the network predicts what the scene would look like from a new, arbitrary viewpoint.” – ScienceMag
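To make that two-stage design a little more concrete, here is a minimal sketch in PyTorch of a representation network feeding a generation network. This is not DeepMind’s implementation: the layer sizes, the 64×64 image resolution, the 7-dimensional viewpoint vector, and the simple feed-forward decoder are all assumptions chosen only to keep the example small and runnable, whereas the actual GQN uses a far more elaborate recurrent, latent-variable generator.

```python
# Illustrative sketch of the GQN idea (not DeepMind's code): one network
# encodes each (image, viewpoint) pair into a scene representation, the
# other renders the scene from an arbitrary query viewpoint.
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Encodes a single (image, viewpoint) observation into a scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),    # 64x64 -> 31x31
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),   # 31x31 -> 14x14
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),  # 14x14 -> 6x6
            nn.Flatten(),
        )
        # Viewpoint assumed to be a 7-dim camera pose vector (an assumption here).
        self.fc = nn.Linear(128 * 6 * 6 + 7, repr_dim)

    def forward(self, image, viewpoint):
        features = self.conv(image)
        return self.fc(torch.cat([features, viewpoint], dim=-1))

class GenerationNet(nn.Module):
    """Predicts the view from a query viewpoint, given the scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(repr_dim + 7, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_viewpoint):
        out = self.decoder(torch.cat([scene_repr, query_viewpoint], dim=-1))
        return out.view(-1, 3, 64, 64)

# Usage: aggregate the representations of a few observed views (summation is
# a simplifying assumption), then render the scene from an unseen viewpoint.
encoder, decoder = RepresentationNet(), GenerationNet()
images = torch.rand(5, 3, 64, 64)   # five 2D observations of one scene
viewpoints = torch.rand(5, 7)       # camera pose for each observation
scene_repr = encoder(images, viewpoints).sum(dim=0, keepdim=True)
query = torch.rand(1, 7)            # an arbitrary new viewpoint
predicted_image = decoder(scene_repr, query)  # shape (1, 3, 64, 64)
```

In rough terms, the first network plays the role of the “abstract description of the scene” mentioned in the quote above, and the second network plays the role of the predictor that imagines the scene from a new angle.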
Impressive as it is, the technology does have its limits. According to the researchers, the GQN has only been tested on relatively simple scenes with a small number of objects, because it still lacks the sophistication needed to generate more complex 3D models. “While there is still much more research to be done before our approach is ready to be deployed in practice, we believe this work is a sizeable step towards fully autonomous scene understanding,” the researchers wrote.
Above: The GQN creating a manipulable virtual object from 2D sample data.
It is worth noting here that various DeepMind algorithms have been performing some pretty impressive tasks of late. Last year, DeepMind’s AlphaGo Zero taught itself to play the ancient Chinese board game Go, while another DeepMind AI system last month learned to navigate a maze in a way not entirely different from how a human brain functions.
Above: Another 3D maze imagined by the GQN.
However, the most talked-about DeepMind project has to be AlphaZero, which beat the highly acclaimed Stockfish chess engine back in December after posting an unbeaten record in a 100-game series. The AI won 28 games and drew the remaining 72, winning the contest hands down against the best chess program in the world without any human intervention or assistance whatsoever.
Credits: VentureBeat
Arif Khoja is a developer and a JavaScript enthusiast who loves logical programming and has first-hand experience building a cutting-edge internet product using Angular. He is also an open-source enthusiast who is keen on learning and sharing. He writes JavaScript for both the frontend and the backend, loves learning and sharing tech, and has hands-on experience in SEO, writing articles about the latest emerging technologies.