Monday, June 28, 2010

Gathering feedback: 5 Major Challenges in Interactive Rendering

One of the talks I'm doing for the Beyond Programmable Shading course at Siggraph 2010 is about 5 Major Challenges in Interactive Rendering. To be able to give good examples of the major challenges we face over the next 5 or even 10 years, I thought it would be a good idea to gather some feedback & content from the internets.

In other words, I'm lazy and crowdsourcing parts of the presentation :)

Broad feedback would be very useful for getting a better understanding of how we as an industry want to move forward, and which problems & challenges are the most important to solve first.

If you are a game developer, or work with interactive rendering in any other field, I would very much appreciate your feedback on the topic: which specific challenges would you like to see solved in the next 5-10 years, and why are those challenges important to you? I'll try to include that in the talk.

If you also have ideas for how a challenge should be solved, or pictures/movies that clearly demonstrate what you would like to achieve - that would be even better!

The challenges can be anything from rather small things, such as the lack of a programmable blending stage, which makes rendering decals together with Deferred Shading more difficult, expensive and awkward.
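
To make the decal example concrete: what a decal really wants is a read-modify-write against the G-buffer texel it covers, which fixed-function blending can't express, so in practice you copy the G-buffer first and sample the copy. A rough sketch of that workaround, written as a CUDA-style kernel with entirely made-up names (a programmable blend stage would make the read below legal in-place):

```cuda
#include <cuda_runtime.h>

// Hedged sketch of why decals + deferred shading are awkward without
// programmable blending. A pixel shader cannot read the G-buffer texel
// it is outputting to, so the engine copies the G-buffer and samples
// the copy. Assumes the decal rectangle lies fully on screen.
__global__ void applyDecal(const float4* gbufferCopy, // pre-pass copy
                           float4* gbuffer,           // live target
                           const float4* decalTex,
                           int x0, int y0, int dw, int dh, int screenW)
{
    int dx = blockIdx.x * blockDim.x + threadIdx.x;
    int dy = blockIdx.y * blockDim.y + threadIdx.y;
    if (dx >= dw || dy >= dh) return;

    int idx = (y0 + dy) * screenW + (x0 + dx);

    float4 dst = gbufferCopy[idx];       // the read blending can't do
    float4 src = decalTex[dy * dw + dx];

    // Manual "blend stage": alpha-blend the albedo-ish channels,
    // leave the fourth G-buffer channel untouched.
    float4 outv;
    outv.x = src.x * src.w + dst.x * (1.0f - src.w);
    outv.y = src.y * src.w + dst.y * (1.0f - src.w);
    outv.z = src.z * src.w + dst.z * (1.0f - src.w);
    outv.w = dst.w;
    gbuffer[idx] = outv;
}
```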

At the other end are things like the major challenge of full-scene dynamic glossy reflections in arbitrary environments, which can't be done accurately & generally at all today, and where different types of raytracing may or may not be the way forward in the long term (with their own set of issues and problems, of course).



Feedback about challenges is also not limited to 'rendering features' per se. For example, another key challenge is developing great programming models for these massive data-parallel machines, to empower developers broadly & utilize future hardware efficiently.
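
For a sense of what today's programming models look like, here is a minimal CUDA-style example - the canonical SAXPY kernel, with the developer hand-managing launch configuration and memory layout:

```cuda
#include <cuda_runtime.h>

// Minimal example of today's data-parallel programming model: one flat
// kernel per array operation, with grid sizes and device memory managed
// explicitly by the programmer.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Launch: saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
```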

If you have any feedback - please either post in the comments here, tweet to me at @repi or mail comments/feedback/examples to me at repi (at) dice.se.

Thanks!

10 comments:

Andrew Richards said...

In terms of rendering technologies, I think ray-tracing has a part to play, but only in some cases. Procedural rendering would be nice, but no one seems to have enabled it technically. Totally deformable meshes, too. I'm particularly interested in fluids: there's an interesting problem in that you can calculate fluid particles in parallel quite easily, but you can't produce the polygonal mesh that covers them nearly as easily, which means you get lots of spheres for fluids instead of something that looks like water.
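
To illustrate the asymmetry Andrew describes: the particle update is embarrassingly parallel, one independent thread per particle, while extracting a watertight polygonal surface from those same particles (e.g. marching cubes over a density field) is irregular and data-dependent. A minimal sketch of the easy half, in CUDA, with all names illustrative:

```cuda
#include <cuda_runtime.h>

// The "easy" half of the fluid problem: advancing particles is one
// independent thread per particle. The hard half, building the mesh
// that covers them, has no such trivially parallel structure.
__global__ void integrateParticles(float4* pos,  // xyz = position
                                   float4* vel,  // xyz = velocity
                                   int n, float dt, float gravityY)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 v = vel[i];
    v.y += gravityY * dt;   // apply gravity

    float4 p = pos[i];      // explicit Euler step
    p.x += v.x * dt;
    p.y += v.y * dt;
    p.z += v.z * dt;

    pos[i] = p;
    vel[i] = v;
}
```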

On a technical level, power consumption is going to become a problem that the programmer will have to think about, but doesn't currently. The memory bandwidth problem will get worse. There will have to be a mix of data-parallel and non-data-parallel programming, which means a mixture of cores. Programmers will have to think about the different kinds of memory in the system and keep the right kind of data in the right kind of memory - lots of things programmers don't have to think about now. Memory sharing between CPU and GPU becomes important (Fusion and others will enable that), but it's a harder problem than you might hope. It's necessary for ray-tracing and other graphics effects, because you need to have the whole scene on the GPU.

Steve said...

I still view shadows as an unsolved problem. Shadow buffers just aren't that great - they are expensive to render and project for any more than a few lights in the scene. Shadow buffers are almost always oversampled or undersampled because of the non-linearity of the mapping between shadow texels and screen pixels. Perspective-based shadow buffers introduce as many problems as they solve. And forget about shadowing of indirect light.
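
As a back-of-the-envelope illustration of the over/undersampling problem: the world-space footprint of a screen pixel grows with view distance, while a shadow-map texel's footprint is fixed by the light frustum, so the ratio between the two swings wildly across a single view. A hedged sketch (all parameter names hypothetical):

```cuda
#include <math.h>

// Rough estimate of shadow texels per screen pixel for a directional
// light with an orthographic shadow frustum; shows how one shadow map
// is simultaneously over- and undersampled across a view.
__host__ __device__
float shadowTexelsPerScreenPixel(float viewDistance,     // eye -> receiver
                                 float screenHeightPx,   // e.g. 1080
                                 float verticalFovRad,   // e.g. ~1.05 (60 deg)
                                 float shadowMapRes,     // e.g. 2048
                                 float lightFrustumSize) // world units covered
{
    // World-space height of one screen pixel at this distance
    // (exact at the centre of a perspective view).
    float pixelWorld = 2.0f * viewDistance * tanf(verticalFovRad * 0.5f)
                       / screenHeightPx;
    // World-space size of one shadow-map texel.
    float texelWorld = lightFrustumSize / shadowMapRes;
    // > 1: oversampled (wasted resolution); < 1: undersampled (blocky).
    return pixelWorld / texelWorld;
}
```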

As for what's coming for shadows: this is an area where ray tracing could help, if the performance is there (I'm skeptical).

Imperfect Shadow Maps look promising for indirect light. Crytek's light propagation volumes are another technique for indirect shadowing, but at a quality cost (very boxy shadows).

There was a paper a while back, which I can't find, that virtualized the shadow buffer and filled in its texture pages on demand, at the resolution necessary to get a perfect texel/pixel ratio. It required a Larrabee-like architecture to be performant, though, and given the trouble that architecture has had, maybe it wouldn't be practical.

Rex Guo said...

Vince: are you referring to Irregular Z-buffer / Shadow-mapping using Larrabee: http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf

Rex Guo said...

or Alias-Free Shadow Maps, which also requires a flexible memory model supporting an irregular grid: http://www.tml.hut.fi/~timo/publications/aila2004egsr_paper.pdf

Unknown said...

He could also be thinking of Resolution-Matched Shadow Maps (by Aaron Lefohn), which builds a quad-tree page table of the required shadow samples and generates the required pages at the proper LODs.
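
The core idea can be sketched as: per screen pixel, use the screen-space derivatives of the shadow-map coordinates to pick the LOD at which one shadow texel is roughly one screen pixel, then request that page. A rough illustration (names and details are mine, not from the paper):

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Hedged sketch of resolution matching: choose the shadow-map LOD
// whose texel footprint best matches this pixel's footprint in shadow
// uv space. LOD 0 is the finest level; each level halves resolution.
__device__ int requiredShadowLod(float2 dShadowUvDx, // d(shadow uv)/dx
                                 float2 dShadowUvDy, // d(shadow uv)/dy
                                 int    finestRes)   // e.g. 16384 virtual
{
    // Pixel footprint in shadow uv space, per screen axis.
    float fx = sqrtf(dShadowUvDx.x * dShadowUvDx.x +
                     dShadowUvDx.y * dShadowUvDx.y);
    float fy = sqrtf(dShadowUvDy.x * dShadowUvDy.x +
                     dShadowUvDy.y * dShadowUvDy.y);
    // Footprint measured in finest-level texels.
    float footprint = fmaxf(fx, fy) * (float)finestRes;
    int lod = (int)ceilf(log2f(fmaxf(footprint, 1.0f)));
    return lod; // caller marks page (uv, lod) as needed in the quad-tree
}
```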

DEADC0DE said...

- micropolygons. Quads do not quite cut it anymore.

- fast iteration. C++ compile/link/die sucks. We need something live-editable, live-swappable and checked. OpenCL or so...

- lighting. We still have only a few, very ad-hoc ways of solving the rendering equation (see below). We can integrate only a few kinds of lights against a few material models, and we can compute decent occlusion for even fewer of those. We need more research.

- algorithms & data structures. We need more flexible data structures on the GPU, and more general algorithms.
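
For reference, the rendering equation that all of these lighting techniques approximate in one way or another (Kajiya 1986):

```latex
% Outgoing radiance at point x in direction w_o: emitted radiance plus
% incoming radiance integrated against the BRDF over the hemisphere.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i)
    \, (\omega_i \cdot n) \, \mathrm{d}\omega_i
```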

repi said...

Thanks for the feedback guys, agree with all of it! :)

Tyler Woods said...
This comment has been removed by a blog administrator.
Anonymous said...

How about the ability to scatter writes on the GPU? Good motion blur at last.. :)
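
For illustration, scatter-based motion blur as a CUDA-style kernel: each source pixel splats its color along its own velocity vector, which needs exactly the scatter writes (here emulated with atomics) that the fixed-function pipeline doesn't expose. A rough sketch, all names illustrative:

```cuda
#include <cuda_runtime.h>

// Hedged sketch of scatter-based motion blur. Assumes accum is zeroed
// before launch, taps >= 2, and float atomicAdd support (sm_20+). A
// real renderer would also need proper weighting/normalization.
__global__ void scatterMotionBlur(const float4* srcColor,
                                  const float2* velocity,  // px per frame
                                  float4* accum, int w, int h, int taps)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    int idx = y * w + x;
    float4 c = srcColor[idx];
    float2 v = velocity[idx];

    // Splat this pixel's color along its motion vector.
    for (int i = 0; i < taps; ++i)
    {
        float t = (float)i / (float)(taps - 1) - 0.5f; // -0.5 .. 0.5
        int sx = x + (int)(v.x * t);
        int sy = y + (int)(v.y * t);
        if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;

        int dst = sy * w + sx;
        float wgt = 1.0f / (float)taps;
        // Scatter: many threads may hit the same destination pixel,
        // hence the atomics. Divide by accum.w afterwards to resolve.
        atomicAdd(&accum[dst].x, c.x * wgt);
        atomicAdd(&accum[dst].y, c.y * wgt);
        atomicAdd(&accum[dst].z, c.z * wgt);
        atomicAdd(&accum[dst].w, wgt);
    }
}
```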

Otherwise, it has felt recently like we're so nearly there. GPU flexibility has improved so much - to the point where there's not much that the hardware actually completely prevents. It's flexible enough now to bend it into what you want to do, especially since Compute/CUDA/OpenCL came along. It's just that the bandwidth and performance aren't quite there - things don't scale up to practical applications yet.

Many of the things we want to do in the next 5-10 years are the same things we wanted to do the last few years. Fluids, global illumination, indirect lighting, scattering through transparent materials.. achieving the rendering fidelity that offline takes for granted. It feels like we've made progress on a lot of those already, but they still aren't practical for a real application, or the "solution" reached so far is a temporary hack that will be replaced when the means allow it (SSAO..).

Most of those things on the wishlist imply raytracing, and that's one big challenge - combining raytracing and rasterisation efficiently. Being able to manage a representation of a dynamic scene for rasterisation, and also have a representation that can be raytraced efficiently so we can use it for lighting and so on.. and to have the bandwidth/performance to do that raytracing. We've got some of the way already, but usually the representation is something in a volume texture - e.g. a signed distance field - and that's too memory-intensive and slow to update for large practical applications.
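
A minimal sketch of the signed-distance-field approach the comment mentions: sphere tracing, where each step advances the ray by the distance to the nearest surface, so a coarse SDF grid can be ray-marched cheaply. All names here are illustrative, not from the post:

```cuda
#include <cuda_runtime.h>

struct Vec3 { float x, y, z; };

// Nearest-neighbour SDF lookup in a res^3 grid covering [0,1)^3;
// a real renderer would filter trilinearly.
__device__ float sampleSdf(const float* grid, int res, Vec3 p)
{
    int xi = min(max((int)(p.x * res), 0), res - 1);
    int yi = min(max((int)(p.y * res), 0), res - 1);
    int zi = min(max((int)(p.z * res), 0), res - 1);
    return grid[(zi * res + yi) * res + xi];
}

// Sphere tracing: step the ray forward by the sampled distance, which
// is always safe since no surface can be closer than that.
__device__ bool sphereTrace(const float* grid, int res,
                            Vec3 origin, Vec3 dir, // dir normalized
                            float maxDist, float* hitT)
{
    float t = 0.0f;
    const float eps = 1e-3f; // surface hit threshold
    while (t < maxDist)
    {
        Vec3 p = { origin.x + dir.x * t,
                   origin.y + dir.y * t,
                   origin.z + dir.z * t };
        float d = sampleSdf(grid, res, p);
        if (d < eps) { *hitT = t; return true; }
        t += d;
    }
    return false;
}
```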

Hopefully we're not far away. :)

Unknown said...

I see the big challenges at a broader systems scale - the skyrocketing costs of asset creation (look how many names are listed in the visual effects credits of films today!) are driven both by tech and by audience appetite - and the proliferation of platforms (& markets) does not abate. Products like "Just Cause 2," "Farmville," and YouTube all have potential for overlap and interaction. Innovation in rendering and graphics is not just at the high end for a single user; it can also happen across many users on many platforms.