Category: Simulations


Part of my interest is to understand the quantum world in a less abstract and weird way than the one it is currently framed in. I believe there is a more “Newtonian” concept hidden in the weirdness of quantum mechanics, so I came up with this frictionless fluid model to describe the aether (the void of empty space). In this model structures (a sort of vortex) can form, appearing to us as “elementary particles” with their respective properties (like charge etc.).

This simulation shows the evolution of an initial disturbance (in momentum) in an experimental aether. It is mostly axially symmetric because the initial conditions are, which makes it easier to follow what is happening. Actually the initial disturbance is slightly off-center (by 0.1% of the simulation cube's edge length), which becomes apparent towards the end of this video; I did this to show the extreme sensitivity to initial conditions.

Here you can see it in action.

After thinking about how our universe might work at the lowest level, I got the idea that what we call “particles” are nothing but vortices in a frictionless fluid, so I made a simulation to test this idea.

So this is a simulation of an aether (1920×1080 grid; toroidal topology/wrapping) with an initial disturbance leading to the temporary creation of particles/anti-particles (a.k.a. vortices), their annihilation and the radiation of energy. This illustrates an intuitive solution to the wave/particle duality, the conversion between these two forms, why particles cannot be pinned down to an exact position (uncertainty), bremsstrahlung, and a mechanism for “borrowing energy from empty space for brief periods of time”. Particles are accelerated/decelerated by waves of energy while repelling other particles and attracting their anti-particles.

The waves would actually travel at the speed of light, so this simulation is an extreme slow motion of what might actually happen at the smallest scale of our universe.

Colors and arrows indicate the “direction of flow of free space”, brightness indicates pressure, and white regions indicate vortex/particle centers.
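
To give a rough idea of the mechanics, here is a minimal Python sketch of this kind of setup: a compressible, inviscid fluid on a periodic (toroidal) grid with a slightly off-center momentum disturbance. This is a simplified stand-in, not the actual simulation code; the isothermal equation of state, the first-order Lax-Friedrichs scheme and all constants are assumptions made for the sketch.

```python
import numpy as np

# Minimal stand-in for the simulation described above: an inviscid,
# compressible 2-D fluid on a periodic ("toroidal") grid. Isothermal
# equation of state p = c^2 * rho; first-order Lax-Friedrichs update.
# Note: this scheme has numerical diffusion, so a truly "frictionless"
# aether would need a less dissipative integrator.

N = 256                    # grid size (the original uses 1920x1080)
c = 1.0                    # wave speed (the model's "speed of light")
dx = 1.0 / N
dt = 0.2 * dx / c          # CFL-limited time step

rho = np.ones((N, N))      # density of "free space"
mx = np.zeros((N, N))      # momentum density, x component
my = np.zeros((N, N))      # momentum density, y component

# Initial momentum disturbance, slightly off-center as in the video.
y, x = np.mgrid[0:N, 0:N] * dx
r2 = (x - 0.5 - 0.001) ** 2 + (y - 0.5) ** 2
mx += 0.5 * np.exp(-r2 / 0.001)

def step(rho, mx, my):
    u, v = mx / rho, my / rho
    p = c * c * rho                       # isothermal pressure
    fluxes_x = (mx, mx * u + p, my * u)   # fluxes of (rho, mx, my) in x
    fluxes_y = (my, mx * v, my * v + p)   # fluxes of (rho, mx, my) in y
    out = []
    for U, fx, fy in zip((rho, mx, my), fluxes_x, fluxes_y):
        avg = 0.25 * (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
                      np.roll(U, 1, 1) + np.roll(U, -1, 1))
        dfx = (np.roll(fx, -1, 1) - np.roll(fx, 1, 1)) / (2 * dx)
        dfy = (np.roll(fy, -1, 0) - np.roll(fy, 1, 0)) / (2 * dx)
        out.append(avg - dt * (dfx + dfy))
    return out

for _ in range(1000):
    rho, mx, my = step(rho, mx, my)
# To visualize as above: hue from atan2(v, u), brightness from pressure.
```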

This is my first draft of a particle-based fluid simulation implemented using CUDA.

It displays two immiscible liquids with different densities, let's say oil (yellow) and water (blue). From time to time I apply a push force somewhere in the scene to cause some action. You can see pressure waves running through the fluid, waves and breakers appearing at the surface, as well as drops and filaments forming and other surface tension effects.

The simulation consists of 65536 particles, each of which interacts only with its local neighborhood. By exchanging appropriate forces, all of them together form the fluid. None of the phenomena listed above are programmed explicitly; they emerge from the local particle/particle interactions. The simulation runs at 60 fps on a GeForce GTX 285.
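
The actual CUDA kernels are not shown in this post, so here is a hypothetical CPU-side Python sketch of the core machinery: each particle interacts only with neighbors inside a radius, found via uniform-grid binning (the standard trick that makes this GPU-friendly). The toy pair force and all constants are placeholders, and only a single fluid phase is shown.

```python
import numpy as np

# Hypothetical sketch of the neighborhood machinery: particles interact
# only with neighbors inside radius H, found via uniform-grid binning.
# The toy pair force (soft repulsion plus damping) is a placeholder, not
# the force model of the actual simulation.

H = 0.05                   # interaction radius
N = 4096                   # particle count (the original uses 65536)

pos = np.random.rand(N, 2)
vel = np.zeros((N, 2))

def neighbor_pairs(pos):
    """All index pairs (i, j), i < j, with |pos[i] - pos[j]| < H."""
    buckets = {}
    for i, c in enumerate(map(tuple, np.floor(pos / H).astype(int))):
        buckets.setdefault(c, []).append(i)
    pairs = []
    for (cx, cy), idx in buckets.items():
        for ox in (-1, 0, 1):
            for oy in (-1, 0, 1):
                for j in buckets.get((cx + ox, cy + oy), []):
                    for i in idx:
                        if i < j and np.sum((pos[i] - pos[j]) ** 2) < H * H:
                            pairs.append((i, j))
    return pairs

def forces(pos, vel):
    f = np.zeros_like(pos)
    for i, j in neighbor_pairs(pos):
        d = pos[i] - pos[j]
        r = np.linalg.norm(d) + 1e-9
        # Repulsion that grows as particles approach, plus a damping term;
        # pressure waves and surface effects emerge from sums of such terms.
        fij = 50.0 * (H - r) / r * d - 0.1 * (vel[i] - vel[j])
        f[i] += fij
        f[j] -= fij                       # Newton's third law
    return f

dt = 1e-3
for _ in range(10):
    vel += dt * forces(pos, vel)
    pos = (pos + dt * vel) % 1.0          # keep particles in the unit box
```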

The video plays back at half speed because it is encoded at 30 fps while the original rendering ran at 60 fps.

My plan is to add (particle-based) rigid bodies as well and to perform finite element analysis on them so they can break apart when external forces exceed the bonding forces.

This was an attempt to simulate the structure-forming process in the universe.
3 million particles were initially placed at random positions within a sphere of space, so they appear as a shapeless, homogeneous fog.
Now every particle interacts with all others through gravity. By that alone they would implode into a single dense region. To avoid this, a “vacuum force” is applied which pushes each particle outwards in proportion to its distance from the center (see the sketch below). This counteracts the collapse caused by the gravitational field, and instead the particles converge into this sponge-like pattern. Where filaments connect into dense regions would be the locations of star clusters (and within those you would see galaxies), but for each “star cluster” there are only a few particles left (around 40), so you cannot see more detail than that with “just” 3 million particles.
The pattern actually matches quite well with serious supercomputer calculations done by astrophysicists.
I did the simulation in 3D as well, and basically the same pattern emerges, just in 3D (but it is not as easy to visualize, and many more particles would be needed).
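
Here is a toy Python version of the force law described above: all-pairs gravity plus an outward “vacuum force” proportional to the distance from the center. The particle count and the constants G, LAMBDA and EPS are illustrative guesses, nowhere near the original's 3 million particles.

```python
import numpy as np

# Toy version of the structure-formation force law: all-pairs gravity
# plus a "vacuum force" pushing each particle outwards in proportion to
# its distance from the center. All constants are illustrative guesses.

N, G, LAMBDA, EPS = 1000, 1e-4, 0.5, 1e-2

# Random positions uniformly distributed inside the unit sphere.
pos = np.random.randn(N, 3)
pos /= np.linalg.norm(pos, axis=1, keepdims=True)
pos *= np.random.rand(N, 1) ** (1.0 / 3.0)
vel = np.zeros_like(pos)

def accel(pos):
    d = pos[None, :, :] - pos[:, None, :]       # d[i, j] = pos[j] - pos[i]
    r2 = np.sum(d * d, axis=-1) + EPS ** 2      # softened squared distances
    grav = G * np.sum(d / r2[..., None] ** 1.5, axis=1)
    return grav + LAMBDA * pos                  # outward "vacuum force"

dt = 0.05
for _ in range(400):
    vel += dt * accel(pos)
    pos += dt * vel
```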


I have always been fascinated by the visual beauty and mind-blowing force of large-scale explosions. And since nobody should get hurt, the computer was once again the weapon of choice.
In the upper image you can see the simulation view showing flow dynamics and the lower image shows a sequence of final renderings.
The simulation uses grid-based Navier-Stokes fluid dynamics plus additional Kolmogorov turbulence to compute the dynamics of the airflow.
A large number of particles is then advected along this flow and rendered each frame with a custom volume renderer.
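
As an illustration of the advection step alone, here is a small Python sketch: sample a grid velocity field at each particle position with bilinear interpolation and move the particles with a midpoint (RK2) step. The random field below is just a stand-in for the solver output; the Navier-Stokes step, the turbulence model and the volume renderer are not part of this sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Advection-only sketch: sample a grid velocity field at each particle
# position (bilinear interpolation) and take a midpoint (RK2) step.
# The random field stands in for the Navier-Stokes solver's output.

N = 128
vel_x = np.random.randn(N, N)
vel_y = np.random.randn(N, N)
particles = np.random.rand(5000, 2) * (N - 1)   # (x, y) in grid coordinates

def sample(field, pts):
    # map_coordinates expects (row, col) = (y, x) ordering.
    return map_coordinates(field, [pts[:, 1], pts[:, 0]], order=1, mode='wrap')

def advect(pts, dt=0.5):
    v = np.stack([sample(vel_x, pts), sample(vel_y, pts)], axis=1)
    mid = pts + 0.5 * dt * v                    # midpoint estimate
    v = np.stack([sample(vel_x, mid), sample(vel_y, mid)], axis=1)
    return pts + dt * v

for _ in range(100):
    particles = advect(particles)
```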

This is part of my “artificial perception” project.

An image (16×16 pixels) is generated which consists of 8 horizontal and 8 vertical white bars, each of which is either shown or hidden by chance. This means a total of 2^16 = 65536 patterns is possible. Additionally, the generated image is degraded with 50% noise to make recognition harder (which ultimately makes the number of possible patterns infinite).
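
For concreteness, here is a small Python generator for such patterns. The 2-pixel bar width and the noise model (reading “50% noise” as: half of all pixels are replaced by coin flips) are my assumptions; the original generator may differ in these details.

```python
import numpy as np

# Generator for the bar patterns described above. Bar width (2 pixels)
# and the exact noise model are assumptions about the original setup.

def make_pattern(rng, noise=0.5):
    bars = rng.random(16) < 0.5        # one on/off bit per bar: 2^16 patterns
    img = np.zeros((16, 16), dtype=bool)
    for b in range(8):
        if bars[b]:                    # horizontal bars, 2 pixels wide
            img[2 * b: 2 * b + 2, :] = True
        if bars[8 + b]:                # vertical bars, 2 pixels wide
            img[:, 2 * b: 2 * b + 2] = True
    replace = rng.random((16, 16)) < noise     # pixels hit by noise...
    coin = rng.random((16, 16)) < 0.5          # ...become random values
    return np.where(replace, coin, img), bars

rng = np.random.default_rng(0)
img, bars = make_pattern(rng)
```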

(A) shows a small subset of these input patterns.

Now these patterns are presented one by one (no batch processing) to a system consisting of 20 artificial neurons (the number is an arbitrary choice), and each neuron updates its synapses according to my new learning rule.
The idea is that the system “learns to understand” the pattern-generating process: instead of trying to remember all possible patterns (65536 without noise, and close to infinite for the noisy ones), which is completely infeasible due to size constraints, it extracts each bar separately (even though the bars are almost never shown in isolation, but always in combination with other bars). It can do so because every pattern experienced so far can be reconstructed from these 2×8 separate bars.
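
I will not go into the details of my learning rule here, but to make the online setting concrete, here is a classic baseline with the same one-pattern-at-a-time structure: winner-take-all competitive Hebbian learning. To be clear, this is not the rule used above; a plain winner-take-all learner captures whole-pattern prototypes, and separating the individual bars additionally requires some form of lateral decorrelation.

```python
import numpy as np

# NOT the learning rule described above, just a classic online baseline
# with the same one-pattern-at-a-time structure: winner-take-all
# competitive Hebbian learning. On its own it learns whole-pattern
# prototypes rather than the individual bars.

def train(patterns, n_neurons=20, lr=0.05, seed=1):
    rng = np.random.default_rng(seed)
    w = rng.random((n_neurons, patterns.shape[1]))
    for x in patterns:                     # strictly online, no batches
        winner = np.argmax(w @ x)          # best-matching neuron
        w[winner] += lr * (x - w[winner])  # move its weights toward the input
    return w

# Usage with a stream of flattened 16x16 images (shape: samples x 256):
X = (np.random.default_rng(2).random((1000, 256)) < 0.3).astype(float)
W = train(X)
```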

(B) shows that 4 of the 20 neurons remain unspecialized while the other 16 each specialize towards one of the 2×8 bars.

Also, as you can see in (B), each neuron's specialization (and therefore the whole percept) is largely free of noise, even though a great amount of noise was inherent in each presented pattern.

This results in a 16-bit code, which is the optimal solution for representing all of these patterns in a compact manner.

Computational complexity is O(N²), where N is the number of neurons; memory complexity is just O(N).
The complexity does NOT depend on the number of patterns shown (or possible).
This should be one of the fastest online learning methods for this task, since no previously experienced patterns are stored or reprocessed.