Category: Graphics


I tried a new way of rendering voxel datasets. This first iteration is just a proof of concept, but it already appears very promising.
The goal I had in mind when starting this project was to render large voxel-based game scenes at realtime performance. My approach automatically converts the original voxel data into a set of heightmaps which are structured in space using a bounding volume hierarchy (BVH). The actual rendering is done using ray-tracing.
It is still vastly unoptimized, but already runs at >60fps on a single GPU (at an output resolution of 1024^2).
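To give an idea of the conversion step, here is a minimal sketch (Python/NumPy, purely illustrative) that collapses a dense boolean voxel grid into a single heightmap by taking the topmost solid voxel of each column. The real converter produces many such heightmaps over sub-regions and organizes them in a BVH, which the ray-tracer then traverses; none of that is shown here, and all names and sizes are placeholders.

```python
import numpy as np

def voxels_to_heightmap(voxels):
    """Collapse a dense boolean voxel grid of shape (x, y, z) into a heightmap.

    For each (x, y) column the stored value is the height of the topmost
    solid voxel, or 0 for an empty column. Illustrative only: the actual
    pipeline emits many heightmaps over sub-regions and stores them in a BVH.
    """
    nx, ny, nz = voxels.shape
    z_indices = np.arange(1, nz + 1)   # 1-based so that 0 can mean "empty column"
    heights = (voxels * z_indices[None, None, :]).max(axis=2)
    return heights

# toy example: a 4x4x8 grid containing a single pillar of height 5
grid = np.zeros((4, 4, 8), dtype=bool)
grid[1, 2, :5] = True
print(voxels_to_heightmap(grid)[1, 2])  # -> 5
```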

The following rendering shows a fairly simple scene (the data consists of about 17 million voxels before conversion).
The differently colored patches visualize the different heightmaps used.
voxel sculpture

This rendering was produced by evaluating the famous Mandelbrot fractal, but instead of just counting the number of iterations for each pixel I wanted to draw the positions of the full sequence/trajectory (until bailout) of points from each starting point. Since the iterated function is quite chaotic, the points within the sequence scatter all over the image in a very unpredictable fashion.
The challenge of rendering this image lies in the fact that when displaying a tiny region of the overall space, only a tiny fraction of the points will fall within the view and billions of them will fall somewhere else, where they don't contribute to the final rendering at all. In order not to waste billions of iterations to get just a few rendered pixels, I used the Metropolis-Hastings algorithm. This technique "mutates" past trajectories (or paths) which did successfully fall in the view in order to produce, with high probability, more points which will also fall in that view. This works well because similar starting points produce similar trajectories (although the trajectories diverge in an exponential fashion). The technique is also quite useful when computing global illumination of 3D scenes, as that is, sampling-wise, quite a similar problem.
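As an illustration of the idea (not my actual renderer), here is a small Python sketch of a Metropolis-style sampler: a starting point whose orbit hits the view is kept and perturbed with small Gaussian mutations, and a mutation is accepted with probability proportional to how many of its orbit points land in the view. The mutation size, iteration counts and view handling are placeholder values, and a production version also needs occasional large "restart" mutations and proper weighting to stay unbiased.

```python
import random

def orbit(c, max_iter=500, bailout=4.0):
    """Orbit of z -> z^2 + c starting at z = 0, truncated at escape."""
    z, points = 0j, []
    for _ in range(max_iter):
        z = z * z + c
        if z.real * z.real + z.imag * z.imag > bailout:
            break
        points.append(z)
    return points

def hits(points, view):
    """Orbit points falling inside the rectangular view (x0, x1, y0, y1)."""
    x0, x1, y0, y1 = view
    return [z for z in points if x0 <= z.real <= x1 and y0 <= z.imag <= y1]

def metropolis_samples(view, n_steps=100000, sigma=0.005):
    """Yield orbit points inside 'view' by mutating successful starting points."""
    c = complex(random.uniform(-2.0, 1.0), random.uniform(-1.5, 1.5))
    score = len(hits(orbit(c), view))
    for _ in range(n_steps):
        # small Gaussian mutation of the current starting point
        c_new = c + complex(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
        pts_new = hits(orbit(c_new), view)
        # Metropolis acceptance: always take improvements, sometimes take worse ones
        if score == 0 or random.random() < len(pts_new) / score:
            c, score = c_new, len(pts_new)
            for z in pts_new:
                yield z
```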

This was an attempt to simulate the structure-forming process in the universe.
3 million particles were initially placed at random positions within a sphere of space so they appear as a shapeless, homogeneous fog.
Every particle then interacts with all others by gravity. By that alone they would implode into a single dense region. To avoid this, a "vacuum force" is applied which pushes each particle outwards in proportion to its distance from the center. This counteracts the collapse caused by the gravitational field, and instead the particles converge into this sponge-like pattern. Where filaments connect into dense regions would be the locations of star clusters (and within those you would see galaxies), but each "star cluster" is left with only a few particles (around 40), so you cannot see more detail than that with "just" 3 million particles.
The pattern actually matches quite well with serious supercomputer calculations done by astrophysicists.
I did the simulation in 3D as well, and basically the same pattern emerges, just in 3D (but it is not as nicely visualizable and many more particles would be needed).
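The core of the model fits in a few lines. Below is a toy O(N^2) Python/NumPy version with made-up parameter values (the actual 3-million-particle run obviously needs a GPU or a tree code): softened pairwise gravity pulls particles together, while the "vacuum force" pushes each one outwards proportionally to its distance from the center.

```python
import numpy as np

def step(pos, vel, dt=0.01, G=1.0, softening=0.05, vacuum=0.1):
    """One semi-implicit Euler step: softened pairwise gravity plus outward 'vacuum force'."""
    diff = pos[None, :, :] - pos[:, None, :]             # r_j - r_i for all pairs
    inv_d3 = ((diff ** 2).sum(-1) + softening ** 2) ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                        # no self-interaction
    accel = G * (diff * inv_d3[:, :, None]).sum(axis=1)  # gravitational pull
    accel += vacuum * pos                                # outward push ~ distance from center
    vel += accel * dt
    pos += vel * dt
    return pos, vel

# small demo: a few thousand particles uniformly filling a unit sphere
rng = np.random.default_rng(0)
d = rng.normal(size=(2000, 3))
pos = d / np.linalg.norm(d, axis=1, keepdims=True) * rng.random((2000, 1)) ** (1 / 3)
vel = np.zeros_like(pos)
for _ in range(200):
    pos, vel = step(pos, vel)
```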


I was always fascinated by the visual beauty and mind-blowing force of large-scale explosions. And since nobody should get hurt, the computer was once again the weapon of choice.
The upper image shows the simulation view with the flow dynamics, and the lower image shows a sequence of final renderings.
The simulation is done using grid-based Navier-Stokes fluid dynamics plus additional Kolmogorov turbulence to compute the dynamics of the airflow.
Plenty of particles are then advected along this flow and rendered each frame with a custom volume renderer.
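The advection step itself is simple; here is a rough 2D Python/NumPy sketch of moving particles along a grid velocity field with a bilinear velocity lookup. The real simulation is 3D, runs a proper Navier-Stokes solve for the grid, layers procedural Kolmogorov-style turbulence on top and executes on the GPU; none of that is shown here, and the array layout is an assumption.

```python
import numpy as np

def advect_particles(particles, velocity, dt):
    """Move particles (N, 2) along a 2D velocity grid of shape (ny, nx, 2).

    The velocity at each particle position is looked up with bilinear
    interpolation, then the particle is pushed forward by one Euler step.
    """
    ny, nx, _ = velocity.shape
    x = np.clip(particles[:, 0], 0, nx - 1.001)
    y = np.clip(particles[:, 1], 0, ny - 1.001)
    x0, y0 = x.astype(int), y.astype(int)
    fx, fy = x - x0, y - y0
    # bilinear blend of the four surrounding grid velocities
    v = ((1 - fx)[:, None] * (1 - fy)[:, None] * velocity[y0, x0] +
         fx[:, None] * (1 - fy)[:, None] * velocity[y0, x0 + 1] +
         (1 - fx)[:, None] * fy[:, None] * velocity[y0 + 1, x0] +
         fx[:, None] * fy[:, None] * velocity[y0 + 1, x0 + 1])
    return particles + v * dt
```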

I created this image by raytracing a distance-function approximation of the Mandelbulb fractal. The rendering of the fractal is three-dimensional and has a lot of depth which is not really visible, because the degree of detail is infinite at all levels and it is therefore impossible to extract depth cues from the details.
For that reason I've extended the renderer to produce stereoscopic output, which greatly enhances the appearance of the actual shape. It is quite stunning to browse through different regions in full stereo 3D.
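The heart of such a raytracer is the distance estimator: a function returning a lower bound on the distance to the fractal surface, so the ray can safely step forward by that amount (sphere tracing). Below is the commonly used power-8 Mandelbulb estimator as a Python sketch; my renderer's exact variant, iteration counts and the stereo camera setup are not shown.

```python
import math

def mandelbulb_de(pos, power=8, max_iter=12, bailout=2.0):
    """Distance estimate for the power-'power' Mandelbulb at point pos = (x, y, z)."""
    x, y, z = pos
    zx, zy, zz = x, y, z
    dr, r = 1.0, 0.0
    for _ in range(max_iter):
        r = math.sqrt(zx * zx + zy * zy + zz * zz)
        if r > bailout or r < 1e-9:
            break
        # convert to spherical coordinates, raise to 'power', convert back
        theta = math.acos(zz / r) * power
        phi = math.atan2(zy, zx) * power
        dr = power * r ** (power - 1) * dr + 1.0
        zr = r ** power
        zx = zr * math.sin(theta) * math.cos(phi) + x
        zy = zr * math.sin(theta) * math.sin(phi) + y
        zz = zr * math.cos(theta) + z
    # can be negative inside the set; sphere tracers typically clamp it
    return 0.5 * math.log(max(r, 1e-9)) * r / dr
```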

You can download a hi-res anaglyph stereo rendering here (note: left/right is swapped due to my oddly swapped glasses).

This video shows realtime raytracing of procedural volumetric data (warped metaballs), including reflection, shading, two lights, shadows and ambient occlusion.
Computation is done in CUDA and executed on a GeForce GTX 285. The video quality is not so great since something went wrong with the frame rate, and I didn't have any good grabber at hand.
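For the curious, "warped metaballs" roughly means a density field like the one sketched below, which the raytracer samples along each ray: the sample position is distorted by a cheap procedural warp and then the classic metaball potential is summed. This Python sketch only illustrates the kind of field involved; the actual CUDA kernel, its warp function and all parameters differ.

```python
import math

def metaball_density(p, balls, warp_amp=0.3, warp_freq=2.0):
    """Density of a 'warped metaball' field at point p = (x, y, z).

    'balls' is a list of (bx, by, bz, radius) tuples. The sample position is
    first warped sinusoidally, then the sum-of-inverse-square-distance metaball
    potential is evaluated. Purely illustrative, not the original kernel.
    """
    x, y, z = p
    # procedural warp of the sample position
    x += warp_amp * math.sin(warp_freq * y)
    y += warp_amp * math.sin(warp_freq * z)
    z += warp_amp * math.sin(warp_freq * x)
    density = 0.0
    for bx, by, bz, radius in balls:
        d2 = (x - bx) ** 2 + (y - by) ** 2 + (z - bz) ** 2
        density += radius * radius / (d2 + 1e-6)
    return density  # an isosurface such as density == 1 defines the blob surface
```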

This is a test of displaying virtual reality on a regular LCD monitor.
The simple 3D scene appears as a hologram on the screen. You can easily look around obstacles just as if the screen were a window or portal instead of a flat surface showing an image.
It has been filmed with an iPod, so the quality isn't all that great and the colors are distorted a little, which yields stronger ghosting, but I think you can still clearly see the effect.
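The stereo output behind this (and behind the anaglyph download above) comes down to rendering the scene once per eye and merging the two images into the red and cyan channels. A minimal Python/NumPy sketch is below, assuming HxWx3 images; which eye feeds the red channel depends on your glasses, see the note that follows.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Merge two per-eye renderings (HxWx3 arrays) into one red/cyan anaglyph frame."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]      # red channel from the left-eye image
    out[..., 1:] = right_rgb[..., 1:]   # green + blue (cyan) from the right-eye image
    return out
```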

NOTE: My red/cyan 3D glasses seem to be a little odd, as they have the RED glass on the RIGHT eye. Usually it is the other way around, so if your 3D glasses have the red glass on the left, just wear them upside down for a proper 3D effect.