DeepMind, a UK-based sister company of Google, has recently developed an AI that can render entire scenes in 3D after observing them only as flat 2D images. AI researchers at the cutting edge are trying to teach machines to learn like humans, and here they have trained an AI to guess what things look like from angles it has never seen.
DeepMind’s scientists came up with a new Generative Query Network (GQN), a neural network designed to teach AI to imagine what a scene of objects would look like from a different perspective. The AI observes flat 2D pictures of a scene and then tries to recreate it. Best of all, DeepMind’s AI uses no human-labelled input or prior knowledge. It observes as few as three images and then predicts what a 3D version of the scene would look like.
Think of it like taking a photo of a cube and asking an AI to render the same picture from a different angle. Things like lighting and shadows would change, as would the direction of the lines making up the cube. To render the requested image, an AI using the GQN has to imagine what the cube would look like from angles it has never actually observed.
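Conceptually, the GQN has two parts: a representation network that encodes each (image, viewpoint) observation and sums the results into a single scene representation, and a generation network that renders an image for a new query viewpoint. The toy sketch below illustrates only that structure; the function names, vector sizes, and arithmetic are hypothetical stand-ins, not DeepMind’s actual networks.

```python
import numpy as np

# Hypothetical toy sketch of the GQN's two-part structure (not DeepMind's code):
# a representation network encodes each (image, viewpoint) observation, the
# encodings are summed into one scene representation, and a generation network
# renders a prediction for an unseen query viewpoint.

def represent(image, viewpoint):
    """Encode one observation into a feature vector (stand-in for a CNN)."""
    return np.concatenate([image.ravel(), viewpoint]).mean() * np.ones(8)

def aggregate(observations):
    """Sum per-observation encodings into a single scene representation."""
    return sum(represent(img, vp) for img, vp in observations)

def generate(scene_repr, query_viewpoint):
    """Predict a flat 'image' for a viewpoint the model never observed
    (stand-in for the generation network)."""
    return np.outer(scene_repr[:4], query_viewpoint).reshape(-1)

# As few as three observations of the same scene:
obs = [(np.random.rand(2, 2), np.random.rand(3)) for _ in range(3)]
scene = aggregate(obs)
prediction = generate(scene, np.array([0.1, 0.2, 0.3]))
```

One consequence of summing the encodings is that the scene representation does not depend on the order in which the observations arrive, which is a natural property to want when the AI is handed an arbitrary handful of snapshots.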
The researchers are working towards fully unsupervised scene understanding. The AI has not yet been trained on real-world images, so the natural next step is rendering realistic scenes from photographs. In the future, DeepMind’s GQN-based AI could potentially generate on-demand 3D scenes that are nearly identical to the real world, using nothing but photographs.
This new technology is definitely exciting, and we can’t wait to see when it will be available. For now there is no confirmation of a release date, but we will share that information as soon as the company announces it.