What do you think the future of gaming holds? If you’re like most people, probably not much. You might say that games are “dull.” But when you dig a little deeper, it turns out that a lot will change about how we game in the next decade: virtual reality is finally within reach, a smartphone may cost next to nothing, and games will look better than they ever have.
It’s hard to believe how far video games have come in such a short period of time. Fast, texture-mapped 3D reached mainstream PCs in 1992 with the release of Wolfenstein 3D (and its follow-up, Spear of Destiny), which used ray casting to draw its maze-like levels. A year later, Star Fox hit the SNES, bringing full 3D polygonal graphics to a home console via the Super FX chip.
Phantasmagoria (1995), Sierra’s full-motion-video adventure for the PC, drew mixed reviews and showed the limits of pre-rendered footage; real-time 3D was where things were headed. In 1996, 3dfx released the Voodoo Graphics chipset, which brought hardware-accelerated texture mapping and filtering to consumer PCs at playable frame rates. Dedicated 3D hardware like this would play an important role in driving graphics forward.
Direct3D arrived in 1996 as part of DirectX, providing a unified hardware abstraction layer so programmers no longer had to target each card’s low-level quirks. DirectX 6 added multitexturing in 1998, and DirectX 8 introduced programmable vertex and pixel shaders in 2000.
1. The Next-Gen is Now
The gap between console generations has never been longer: the PlayStation 4 and Xbox One arrived years after their predecessors had begun to show their age. That left developers to cram everything they could into the existing systems to keep them going, and all that time spent squeezing aging hardware means studios have gotten very good at optimizing their games. It looks like it’s paying off, because games have never looked this good before.
2. Just-in-Time Compilation
Just-in-time (JIT) compilation is a technique where code is compiled while the program runs and injected into the currently running process, rather than being compiled entirely ahead of time. Gamers have been benefiting from this for years: managed runtimes such as Mono (which Unity uses for game scripts) JIT-compile bytecode into native code, and graphics drivers compile shader programs at run time for the specific GPU in the machine. Developers are no longer locked to one set of hardware optimizations at ship time; the runtime can apply new ones as they appear.
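As a rough sketch of the idea (in Python, with made-up game-logic names; a real JIT emits native machine code rather than reusing the language’s own compiler), the core move is generating specialized code at run time, compiling it once, and then calling it many times:

```python
# Toy illustration of JIT-style runtime code generation.
# make_damage_fn and damage are hypothetical names, not a real API.

def make_damage_fn(base, multiplier):
    # Bake constants known only at run time directly into the source,
    # so the generated function does no lookups per call.
    src = f"def damage(level):\n    return {base} + {multiplier} * level"
    namespace = {}
    exec(compile(src, "<jit>", "exec"), namespace)  # compile once
    return namespace["damage"]                      # reuse many times

damage = make_damage_fn(base=10, multiplier=3)
print(damage(5))  # 25
```

The trade-off is the same one real JITs face: compilation happens on the player’s machine, so the one-time cost has to be earned back by the function being called often.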
One downside of this approach is that compiling at run time costs CPU cycles while the game is running, which can become a performance bottleneck, so developers have to profile carefully and reserve JIT compilation for code paths that are actually hot. The new generation may have been a long time coming, but with it come some very interesting features that could change how we game.

3. Virtual Reality

The biggest one is virtual reality, which finally seems like it’s just around the corner now that Oculus has a release window for its developer kit. All of a sudden, your PC can drive an immersive head-mounted display instead of just a flat monitor. Gaming has been getting more immersive for years, but virtual reality takes that to another level.
4. The First Phone to Cost a Dollar
One of the biggest developments in the mobile market right now is how cheap smartphones are becoming. It’s an overstatement to say a smartphone will cost one dollar by 2018, but the trend is hard to argue with: the One Laptop Per Child project built its XO-1 laptop to sell for around $100, and sub-$100 Android smartphones are already on the market.
Beyond price, we are finally getting phones powered by next-generation mobile processors like the ARM Cortex-A53. The current high end is dominated by the Cortex-A15, which is fast but far more power-hungry than the older Cortex-A9. The A53 aims to deliver roughly A9-class performance at a fraction of the power, which works out to several times the performance per watt.
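To make the performance-per-watt comparison concrete, here is the arithmetic with illustrative numbers (made up for the example, not ARM’s published figures): a chip can be slower in absolute terms and still win decisively on efficiency.

```python
# Hypothetical benchmark scores and power draws, chosen only to
# illustrate the performance-per-watt calculation.
a15 = {"score": 3.5, "watts": 1.75}  # fast but power-hungry (made-up values)
a53 = {"score": 2.3, "watts": 0.45}  # slower, far more efficient (made-up values)

def perf_per_watt(chip):
    return chip["score"] / chip["watts"]

print(round(perf_per_watt(a15), 1))  # 2.0
print(round(perf_per_watt(a53), 1))  # 5.1
```

On a phone, where the battery is the budget, the second column is the one that matters.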
The Oculus Rift could be one of the biggest developments in VR for a long time to come. It isn’t close to a consumer product yet; the developer kit is aimed at programmers, and head-mounted displays are already being explored for applications like medical training and military simulation. But in the future virtual reality could become commonplace, and we may well wear headsets like the Rift for extended periods of time.
An early example of this is using simulation to train pilots without ever leaving the ground, and the broader vision goes back to 1965, when computer scientist Ivan Sutherland described an “ultimate display” that could simulate reality convincingly. The first mainstream use of VR beyond games would likely be training, where simulations let you rehearse different scenarios, alongside watching movies.