NVIDIA has developed a technology known as Neural Radiance Fields (NeRF), which uses machine learning to generate 3D objects from two-dimensional photos. An iteration of it, known as Instant NeRF, is claimed to complete this process in just seconds using only a handful of reference photos.

As shown in a demonstration video, Instant NeRF is able to generate a 3D model and its surrounding environment from only four photos taken at different angles of a subject. The company notes that the images must be captured without too much motion, otherwise the result will be blurry. Despite the limited angles provided, the system is able to “fill in the gaps” of the 3D scene by interpolating what wasn’t captured in the photos.

NVIDIA says that this iteration of the technology greatly surpasses earlier versions of NeRF, which took minutes to render a 3D scene and hours to train the underlying AI model beforehand. Instant NeRF, on the other hand, takes only seconds to train, thanks to a newly developed multi-resolution hash grid encoding technique that is optimised to run efficiently on NVIDIA GPUs. The company adds that the system can even run on a single GPU, provided the card is equipped with Tensor Cores, which accelerate artificial intelligence processing.
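To give a rough sense of what multi-resolution hash grid encoding does, here is a minimal, illustrative Python sketch. All parameter values and names here are assumptions for the sake of the example; the real Instant NeRF implementation uses learnable hash tables trained by backpropagation inside a heavily optimised CUDA codebase.

```python
import numpy as np

# Toy parameters (assumed for illustration; the published technique uses
# many more levels and much larger hash tables).
NUM_LEVELS = 4        # grid resolutions, coarse to fine
TABLE_SIZE = 2 ** 14  # entries per per-level hash table
FEATURE_DIM = 2       # feature values stored per entry
BASE_RES = 16         # coarsest grid resolution
GROWTH = 2.0          # per-level resolution multiplier

rng = np.random.default_rng(0)
# One (normally learnable) table of feature vectors per resolution level.
tables = [rng.normal(0.0, 1e-4, (TABLE_SIZE, FEATURE_DIM))
          for _ in range(NUM_LEVELS)]

def hash_corner(corner):
    """Spatial hash of an integer 3D grid corner into a table index."""
    primes = (1, 2654435761, 805459861)
    h = 0
    for c, p in zip(corner, primes):
        h ^= int(c) * p
    return (h & 0xFFFFFFFFFFFFFFFF) % TABLE_SIZE

def encode(x):
    """Encode a point x in [0, 1]^3 by trilinearly interpolating hashed
    grid features at every resolution level and concatenating them."""
    feats = []
    for level in range(NUM_LEVELS):
        res = int(BASE_RES * GROWTH ** level)
        pos = np.asarray(x, dtype=float) * res
        lo = np.floor(pos).astype(int)
        frac = pos - lo
        acc = np.zeros(FEATURE_DIM)
        # Blend the features of the 8 surrounding grid corners.
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    corner = lo + np.array([dx, dy, dz])
                    w = ((frac[0] if dx else 1 - frac[0]) *
                         (frac[1] if dy else 1 - frac[1]) *
                         (frac[2] if dz else 1 - frac[2]))
                    acc += w * tables[level][hash_corner(corner)]
        feats.append(acc)
    return np.concatenate(feats)  # shape: (NUM_LEVELS * FEATURE_DIM,)
```

The coarse levels capture the overall shape of the scene while the fine levels add detail, and because each level is only a hash-table lookup plus an interpolation, a small neural network fed with this encoding can be trained far faster than the large network used by the original NeRF.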

While Instant NeRF would undoubtedly benefit video game and virtual reality development, NVIDIA hopes that the technology could also lead to advances in fields such as robotics and autonomous driving. It could, for instance, be used to train such systems to understand the sizes and shapes of real-world objects, aiding in the development of their autonomy.

(Source: NVIDIA Newsroom)

The post NVIDIA’s NeRF AI Can Render A 3D Scene From A Set Of Photos In Just Seconds appeared first on Lowyat.NET.