Thoughts on photorealism
Since the first CG images were created, many 3D artists and companies have sought photo-realism in their renders. Algorithms such as ray tracing, global illumination, and motion blur were invented to simulate realistic light transport and other effects seen in reality.
As those techniques got better over the years, it became simpler than ever to create realistic-looking outputs without much technical understanding of how these effects are actually simulated in the computer.
The image creation process is getting faster not just because the technology is getting simpler, but also because more ready-made 3D content is available online. Artists can easily pick 3D objects from an online library and find textures, materials, or any other assets they would like to have in their scenes. As an industry, these tendencies are working in our favor and making our lives easier.
The concept and the direction are great, but things still don't just work out of the box. One big factor is rarely taken into consideration and keeps causing problems when it comes to creating physically correct, photo-realistic CGI images: the light!
With 3D objects it is easy to point out flaws or scale errors by comparing them to their real counterparts; doing the same with lights is much harder. Judging light is simply not objective enough, and different people will judge it differently.
A big part of the visualization market is focused on replicating photography and making indistinguishable, hyper-realistic images. With today's software and its real-time, interactive rendering capabilities this is becoming easier and easier: artists can work on an image directly and see what the end result will look like. Playing with the lighting of the scene and finding the right mood is a big part of these workflows.
But even though lighting is such a crucial part, nobody can objectively judge whether the results are correct or not. Outcomes will vary widely and look different from one artist to another.
Of course we can always say that "it looks good and the client was actually really happy with it," but imagine how much more effective a workflow could be if an artist did not have to reinvent the wheel with each new project and 3D scene.
In a multi-person environment, artists could exchange 3D objects and materials between different scenes without the risk of them looking wrong. They could collaborate on a 3D scene without wondering what exactly the other person set up, and without constantly adjusting all the small details when taking something from a library or from a different project.
With today's technology this problem can be solved quite easily: if you want results that look identical to what happens in the real world, you only need to feed all the proper, realistic information into your software. Done!
Software companies, content providers, and the industry as a whole are visibly moving in this direction, but it is still easier said than done. There are obviously a lot of settings to take into consideration and, even worse, many of these settings are interconnected. Expecting an artist to keep track of all the switches and knobs and adjust each one accordingly is basically impossible, especially in a production environment where time is a big factor.
What you need is a system that is fed with scanned real-world data and is rigged and wired using real-world dependency functions. It all comes down to three building blocks:
- Scan reality
- Simulate dependencies
- Bring values to the render engine
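As a purely illustrative sketch of those three blocks, here is a tiny Python example using camera exposure, one of the better-known real-world dependencies: aperture, shutter time, and ISO are measured values, the EV100 formula is the dependency function tying them together, and the result is handed to a render engine. The class and function names are hypothetical (this is not Vermeer's actual API); only the exposure formulas themselves are standard photographic relations.

```python
import math
from dataclasses import dataclass

# Block 1: scanned real-world data. A real system would capture far more
# (light intensities, color temperatures, materials); a camera is the
# simplest example. All names here are hypothetical.
@dataclass
class MeasuredCamera:
    aperture: float      # f-number N
    shutter_time: float  # seconds
    iso: float

# Block 2: real-world dependency functions. EV100 links aperture, shutter
# time and ISO into a single exposure value (the standard photographic
# relation EV100 = log2(N^2 / t * 100 / ISO)).
def ev100(cam: MeasuredCamera) -> float:
    return math.log2(cam.aperture ** 2 / cam.shutter_time * 100.0 / cam.iso)

# A common way physically based renderers turn EV100 into a linear
# exposure multiplier: 1 / (1.2 * 2^EV100).
def exposure_scale(ev: float) -> float:
    return 1.0 / (1.2 * 2.0 ** ev)

# Block 3: bring the derived values to the render engine. A plain dict
# stands in for a real engine API here.
def to_engine_settings(cam: MeasuredCamera) -> dict:
    return {"exposure": exposure_scale(ev100(cam))}

# "Sunny 16" settings: f/16, 1/100 s, ISO 100.
settings = to_engine_settings(MeasuredCamera(aperture=16, shutter_time=1 / 100, iso=100))
```

The point of the wiring is exactly what the text argues: the artist enters only measured, real-world values, and the interconnected knobs downstream are derived automatically instead of being guessed.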
When all of this is done correctly, an artist's path to a good, photo-realistic image becomes straightforward, and the focus can shift to the artistic part of the process. No more technical decisions or guesswork.
That is the mindset we have with Vermeer.
Cover image © John's Diner with John's Chevelle, painting by John Baeder