Back in the ’80s, gamers were blown away by Nintendo’s 8-bit NES compared to the graphics on the Atari 2600. The 16-bit consoles were even more revolutionary, with the SNES’s Mode 7 creating pseudo-3D effects and Star Fox experimenting with polygonal 3D.
Although the PlayStation and N64 acted as growing pains for 3D graphics, the PC had been hitting its stride years before consoles could even compete. Nowadays, it’s not surprising to see photorealistic graphics, stylized cartoon visuals, and true-to-life environments that you could mistake for reality.
In 2019, a company called Quixel released a short film titled “Rebirth.” This impressive display of Unreal Engine 4 would have seemed impossible five years earlier, and a casual viewer could mistake the environments in the film for real ones. Although the cinematic isn’t playable, playing games with those graphics likely isn’t far off.
Casino technology has seen a quick rise in graphics as well. Ever since slot machines went digital with touchscreens, brick-and-mortar casinos have had to hire game developers to support their games. Online casinos use similar technology, with Indian online casinos like Casumo hiring companies like Net Entertainment for their titles.
The uncanny valley is the next hurdle to perfecting gaming technology, because most video games involve controlling or interacting with people. There’s nothing more unsettling than watching something that resembles a human, but not quite, moving its mouth and speaking. Mass Effect: Andromeda failed to push past that boundary, making conversations with the game’s characters feel creepy.
Another hurdle is in-game graphics versus cutscene graphics. When you’re controlling your character, graphical fidelity and frame rate have to be reduced to keep up with everything happening on screen. During a cutscene, the game plays out as scripted. That’s why gamers aren’t impressed by new game announcements with incredible graphics anymore: players know the graphics will be scaled down upon release.
Displaying real human emotion is an issue even when actors use motion capture. In L.A. Noire, every actor was motion captured in hopes of imitating what a face does when we lie, when we’re nervous, and during every other subtle emotion. Although the technology was impressive for its time, it didn’t express emotion appropriately. Instead, characters overdramatized their faces and looked ridiculous.
Skin is the main issue with realism. “Better skin makes characters more believable,” says Phil Scott, Nvidia’s lead technical evangelist in Europe, “and the eyes are the gateway to the soul. As humans, we look at each other in the eyes, we communicate through the eyes; so even though they take a tiny portion of the screen space, developers need to spend a lot of rendering effort on making them realistic.”
Video games are getting closer to making photorealistic graphics a reality. Games like The Last of Us, Watch Dogs, Final Fantasy XV, Detroit: Become Human, and Until Dawn showcase impressive graphics that, thankfully, don’t make the characters freaky to look at. So, where do we go from here?
Environmental graphics have reached a point where it’s difficult to tell whether they’re real, but human characters may take another ten years to get truly right. “Real-time graphics are probably a decade or more behind film in terms of what can be conveyed visually, and film isn’t ‘done’ yet either. I’d say we’ve got decades more innovation to come,” says Tony Tamasi, senior vice president of content and technology at graphics hardware specialist Nvidia.
Even after video game characters look like real people, developers will still have to spend a lot of energy keeping gamers immersed. If a character climbs a ladder awkwardly or glides up a staircase, the photorealistic illusion is ruined. And there are many impressive games with their own distinct style that may forever trump whatever realism photorealistic graphics can achieve.