
NVIDIA has taught an AI to “slow down” any video, simulating the effect of high-speed shooting

The resulting footage is nearly indistinguishable from the output of expensive high-speed cameras.

Researchers at NVIDIA have described a neural network they trained that can “slow down” video shot at 30 frames per second by raising the frame rate to several hundred.
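To get a sense of the scale involved, here is a quick sketch of the frame arithmetic. The 240 FPS target is an illustrative figure borrowed from the frame rate of the training footage mentioned below, not a claim about the exact output rate.

```python
# Rough arithmetic: turning 30 fps footage into, say, 240 fps slow motion
# means synthesizing 7 new frames between every pair of original frames.
source_fps = 30
target_fps = 240
frames_to_synthesize = target_fps // source_fps - 1
print(frames_to_synthesize)  # 7
```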

Using several Tesla V100 GPUs and the PyTorch deep learning framework, the team fed the AI eleven thousand videos recorded at 240 FPS. From this data it learned to “predict” the missing information and fill a video with hundreds of additional frames.
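For illustration only, below is a minimal PyTorch sketch of the general idea of frame interpolation: a network takes two neighbouring frames and predicts an in-between frame. The toy model and all names in it are assumptions made for this example; NVIDIA’s published approach is considerably more sophisticated.

```python
# Toy PyTorch frame-interpolation sketch (illustrative, not NVIDIA's model).
import torch
import torch.nn as nn

class ToyInterpolator(nn.Module):
    def __init__(self):
        super().__init__()
        # Two RGB frames stacked on the channel axis -> one predicted RGB frame.
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame_a, frame_b):
        # Concatenate the two input frames and predict the frame between them.
        return self.net(torch.cat([frame_a, frame_b], dim=1))

model = ToyInterpolator()
f0 = torch.rand(1, 3, 128, 128)   # frame at time t
f1 = torch.rand(1, 3, 128, 128)   # frame at time t + 1/30 s
mid = model(f0, f1)               # predicted frame at time t + 1/60 s
print(mid.shape)                  # torch.Size([1, 3, 128, 128])
```

In practice a trained model of this kind would be applied repeatedly (or conditioned on a time offset) to insert several frames in each gap, which is how a 30 FPS clip can be stretched toward a much higher effective frame rate.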

For comparison, the company showed ordinary video played back at 12% of its original speed alongside the output of the algorithm.

It is hard to spot any artifacts in the generated frames, and the motion in them is smooth; resolution and sharpness do not differ from the source footage.

The AI’s output is still inferior to footage recorded with, for example, Phantom cameras, which cost tens of thousands of dollars. The advantage of NVIDIA’s approach, however, is its versatility.

The neural network can also be used to slow down clips that were already shot in slow motion. As an example, the algorithm was tested on footage from the YouTube channel The Slow Mo Guys.

NVIDIA’s developers note that although modern smartphones support shooting at 240 FPS, doing so is very power-hungry and not always practical.

The neural network can work with footage that has already been shot, although processing a video takes longer than capturing it natively at a high frame rate.
