
Intel Develops Image-Enhancing AI for Photorealistic Gaming


Key Highlights:

  • Intel has developed a deep learning system that converts 3D rendered images into photorealistic images
  • The neural network has already been tested on games like Grand Theft Auto 5
  • The researchers have replicated convincing visuals of Los Angeles and southern California

Relatively High Framerate

In recent years, major tech companies have drawn on a steady stream of hardware and software innovations to elevate the gaming experience.

Advances in AI, particularly in 3D graphics and photorealistic imagery, have been a real boon for gamers. Recently, Intel unveiled a system that converts 3D rendered graphics into photorealistic images. The American multinational technology company’s deep learning system has already shown impressive results in testing.

Intel’s Deep Learning System

Intel’s new neural network aims to improve the gaming experience by converting 3D rendered graphics into photorealistic images. The system has already been tested on visually rich games such as Grand Theft Auto 5, where it convincingly recreated the game’s intricate rendition of Los Angeles and southern California.

With Intel’s new machine learning system, the graphics shift from high-quality synthetic 3D to real-life depictions, albeit with minor technical glitches. What makes the process outstanding is the pace at which Intel’s AI carries out the task: it runs at a relatively high framerate, whereas photorealistic rendering engines can take minutes or even hours to complete a single frame. Notably, these are only preliminary results; according to the researchers, the deep learning models can be optimized to run much faster, which suggests to some analysts that real-time photorealistic game engines are on the horizon.
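To put that framerate gap in perspective, here is a quick bit of arithmetic. The frame times are illustrative assumptions (the article reports no exact timings), chosen only to show the scale of the difference.

```python
# Illustrative frame budgets; these numbers are assumptions, not measurements.
realtime_budget_s = 1 / 30   # a 30 fps game leaves ~33 ms per frame
offline_render_s = 10 * 60   # assume an offline photorealistic render takes ~10 min/frame

print(f"Real-time budget per frame: {realtime_budget_s * 1000:.1f} ms")
print(f"Offline rendering is ~{offline_render_s / realtime_budget_s:,.0f}x slower per frame")
```

Even under these assumed numbers, the offline pipeline is four orders of magnitude away from a playable frame budget, which is why running the enhancement at a high framerate is the headline result.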

Read more: UK’s gaming industry becomes a top employer amid the pandemic

Composition of the Model

Although Intel’s researchers have not elaborated on the deep learning system in detail, the paper published on arXiv and an accompanying video on YouTube offer vital hints about the kind of computing power required to run the model.

The full system is composed of several interconnected neural networks. The G-buffer encoder transforms different render maps into a set of numerical features. These G-buffers, obtained directly from the game engine, are maps of surface normals, depth, albedo, glossiness, atmosphere, and object segmentation. The encoder uses convolutional layers to process this information and outputs a vector of 128 features, which improves the performance of the image enhancement network and avoids the artifacts that similar techniques produce. The image enhancement network takes as input the game’s rendered frame and the features from the G-buffer encoder and generates the photorealistic version of the image.
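To make the data flow concrete, below is a minimal PyTorch sketch of the two inference-time components. This is not Intel’s published architecture: the layer counts, channel widths, the assumed 12 stacked G-buffer channels, and the names GBufferEncoder and EnhancementNetwork are illustrative assumptions; only the 128-dimensional feature output comes from the description above.

```python
import torch
import torch.nn as nn

class GBufferEncoder(nn.Module):
    """Turns stacked G-buffer maps (normals, depth, albedo, glossiness,
    atmosphere, segmentation) into a 128-channel feature map."""
    def __init__(self, gbuffer_channels: int = 12):  # channel count is assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(gbuffer_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, gbuffers: torch.Tensor) -> torch.Tensor:
        return self.net(gbuffers)  # (N, 128, H, W): 128 features per pixel

class EnhancementNetwork(nn.Module):
    """Combines the game's rendered RGB frame with the G-buffer features
    and predicts the photorealistic version of the frame."""
    def __init__(self, feature_channels: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + feature_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([frame, feats], dim=1))

# Usage: one hypothetical 256x256 frame with 12 stacked G-buffer channels.
frame = torch.randn(1, 3, 256, 256)
gbuffers = torch.randn(1, 12, 256, 256)
enhanced = EnhancementNetwork()(frame, GBufferEncoder()(gbuffers))
print(enhanced.shape)  # torch.Size([1, 3, 256, 256])
```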

The remaining components, the discriminator and the LPIPS loss function, are used during training. They grade the output of the enhancement network by evaluating its consistency with the original game-rendered frame and by comparing its photorealistic quality with real images.
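As a rough illustration of how those two training-time components might combine, the sketch below pairs the perceptual distance from the open-source lpips package with a toy discriminator and a non-saturating adversarial term. The loss weights and the discriminator design are assumptions, not values taken from Intel’s paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import lpips  # pip install lpips

# Toy patch discriminator (architecture is an assumption for illustration).
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, kernel_size=4, stride=2, padding=1),
)

# LPIPS perceptual distance; expects images scaled to [-1, 1].
lpips_fn = lpips.LPIPS(net="vgg")

def generator_loss(enhanced, rendered, lambda_lpips=1.0, lambda_adv=0.05):
    # Consistency: keep the enhanced frame perceptually close to the render.
    perceptual = lpips_fn(enhanced, rendered).mean()
    # Realism: a non-saturating GAN term pushes the discriminator to
    # score the enhanced frame as "real".
    adversarial = F.softplus(-discriminator(enhanced)).mean()
    return lambda_lpips * perceptual + lambda_adv * adversarial
```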

Cons Associated with the Deep Learning Model

There are a few cons associated with Intel’s deep learning model. The first is the inference cost of the image enhancement. To judge whether gamers will be able to afford the technology when it becomes available and run it on their own computers, one has to estimate the inference cost: how much memory and computing power is required to run the trained model. In practice, with current mid- and high-end graphics cards, users would have to choose between low-resolution photorealistic output and high-resolution synthetic graphics.
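The kind of back-of-envelope estimate this calls for fits in a few lines. Everything below is hypothetical: the 50-million-parameter count and the assumption that a full-resolution 128-channel feature map dominates activation memory are placeholders chosen to show the arithmetic, not measurements of Intel’s model.

```python
# Rough single-frame inference footprint; all numbers are hypothetical.
def inference_memory_gb(params: int, height: int, width: int,
                        feat_channels: int = 128, bytes_per_val: int = 4) -> float:
    weights = params * bytes_per_val                        # fp32 model weights
    # Assume the dominant activation is the full-resolution feature map.
    activations = height * width * feat_channels * bytes_per_val
    return (weights + activations) / 1e9

# A hypothetical 50M-parameter enhancer at 4K vs. 1080p:
print(f"4K:    {inference_memory_gb(50_000_000, 2160, 3840):.1f} GB")  # ~4.4 GB
print(f"1080p: {inference_memory_gb(50_000_000, 1080, 1920):.1f} GB")  # ~1.3 GB
```

Even this crude estimate shows why resolution would be the first thing to give on a mid-range card, which already has to hold the game itself in memory.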

The sequential and non-linear nature of deep learning operations poses an even bigger problem: in addition to memory, one needs high clock speeds to run all these operations in time. Another vexing issue is the development and training cost of the image-enhancing neural network. Any company looking to replicate Intel’s deep learning models would need three things: data, computing resources, and machine learning talent.

Although Intel’s photorealistic image enhancer shows how far machine learning algorithms can be pushed to perform impressive feats, it will take a few more years before the hardware, the companies, and the market are ready for real-time AI-based photorealistic rendering.

Read more: Grow your company with Artificial Intelligence in 2021
