Category: My Projects


It is part of my interest to understand the quantum world in a less abstract and weird way than the one it is currently framed in. I believe there is a more “Newtonian” concept hidden in the weirdness of quantum mechanics, so I came up with this frictionless fluid model to describe the aether (the void of empty space). In this model, structures (sorts of vortices) can form, appearing to us as “elementary particles” with their respective properties (like charge etc.).

This simulation shows the evolution of an initial disturbance (in momentum) in an experimental aether. It is mostly axially symmetric because the initial conditions are, so it is easier to follow what is happening. Actually, the initial disturbance is slightly off-center (by 0.1% of the simulation cube length), which becomes apparent towards the end of this video – I did this to show the extreme sensitivity to initial conditions.

Here you can see it in action.

After thinking about how our universe might work at the lowest level I got the idea that what we call “particles” are nothing but vortices in a frictionless fluid, so I made a simulation to test this idea.

So this is a simulation of an aether (1920×1080 grid; toroidal topology/wrapping) with an initial disturbance leading to the temporary creation of particles/anti-particles (aka vortices), their annihilation and energy radiation. This illustrates an intuitive solution to the wave/particle duality, the conversion between these two forms, why particles cannot be pinned down to an exact position (uncertainty), bremsstrahlung and a mechanism for “borrowing energy from empty space for brief periods of time”. Particles are accelerated/decelerated by waves of energy while repelling other particles and attracting their anti-particles.

The waves would actually travel at the speed of light, so this simulation is a super slow-motion of what might actually happen at the smallest scale of our universe.

Colors and arrows indicate the “direction of flow of free space”, brightness indicates pressure, and white regions indicate vortex/particle centers.
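
If you are curious how such a wrapping, grid-based aether can be set up in principle, here is a minimal Python sketch (not my actual simulation code; the field names, constants and update rule are simplifications chosen purely for illustration) of one momentum/pressure update step on a toroidal grid:

```python
import numpy as np

# Minimal sketch of a wave-like update on a toroidal (wrapping) grid.
NY, NX = 1080, 1920          # grid size as mentioned in the post
vx = np.zeros((NY, NX))      # x-component of the "flow of free space"
vy = np.zeros((NY, NX))
p  = np.ones((NY, NX))       # pressure-like quantity

# Initial momentum disturbance, slightly off-center (0.1% of the grid width)
cy, cx = NY // 2, NX // 2 + int(0.001 * NX)
vx[cy - 5:cy + 5, cx - 5:cx + 5] = 1.0

def step(vx, vy, p, dt=0.1):
    # np.roll provides the toroidal topology: neighbours wrap around the edges
    dpdx = (np.roll(p, -1, axis=1) - np.roll(p, 1, axis=1)) * 0.5
    dpdy = (np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)) * 0.5
    vx = vx - dt * dpdx                      # pressure gradient accelerates the flow
    vy = vy - dt * dpdy
    div = ((np.roll(vx, -1, axis=1) - np.roll(vx, 1, axis=1)) * 0.5 +
           (np.roll(vy, -1, axis=0) - np.roll(vy, 1, axis=0)) * 0.5)
    p = p - dt * div                         # compression/expansion changes pressure
    return vx, vy, p

for _ in range(100):
    vx, vy, p = step(vx, vy, p)
```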

Algorithm

Current algorithm used to extract sparse features as shown in the previous post.

Extracted sparse features

My current online feature abstraction seems to work fine now. After feeding the system a 30-minute video once, it abstracts features (or “components”) of the input by changing each virtual neuron’s synapses (right plot in the image). This is done in a way which yields a sparse and quasi-binary neural response to the input (“sparse” to decorrelate the mixed signals and “quasi-binary” to be tolerant to noise).

The reconstruction shows that all essential information is preserved and can be reconstructed from the sparse neural response.
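
To give a rough idea of the kind of online update involved (this is not my actual learning rule; the neuron counts, the top-K sparsification and the normalization are illustrative choices), here is a small Python sketch of a Hebbian-style update that drives the responses towards a sparse, quasi-binary code and reconstructs the input from it:

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_NEURONS, K = 64, 32, 4          # e.g. 8x8 input patches, 32 neurons, 4 active

W = rng.normal(size=(N_NEURONS, N_IN))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def learn_step(x, lr=0.01):
    """One online update on a single input patch x (shape: N_IN)."""
    a = W @ x                                # linear responses
    top = np.argsort(a)[-K:]                 # keep only the K strongest ("sparse")
    s = np.zeros(N_NEURONS)
    s[top] = 1.0                             # quasi-binary response
    x_hat = W.T @ s                          # reconstruction from the sparse code
    # Hebbian-style update: active neurons move towards what they failed to explain
    W[top] += lr * np.outer(s[top], x - x_hat)
    W[top] /= np.linalg.norm(W[top], axis=1, keepdims=True)
    return s, x_hat

# feed it random "frames" (stand-ins for video patches)
for _ in range(10000):
    x = rng.normal(size=N_IN)
    s, x_hat = learn_step(x)
```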

In science fiction we often see highly sophisticated self-learning robots, yet most of them completely lack emotions. I believe that even theoretically it won’t be possible to create a robot which can learn to behave in a physical world without having some sort of emotions. But I’m sure emotions aren’t some mystical force, so they can be explained somehow.

Here’s my attempt:
I believe emotions cannot all be treated the same way as some emotions are built on top of other, more low-level emotions. At the lowest level there are only two emotions necessary to give a specific situation informative value and basic judgement: positive/good and negative/bad.

Positive emotions will modify behaviour in order to increase the chance of reproducing (steering towards) the situation, and negative emotions will increase the chance of avoiding (steering away from) the situation.
At first, what is considered good and bad is genetically pre-wired in an individual to maximize the chance of survival and reproduction, and consists of basic concepts like “physical pain, hunger etc. is bad” or “curiosity, family bonds, sexual attraction etc. are good”.
This basic system is implemented in all learning organisms, probably even down to bacteria which learn to avoid poisonous substances.
This also means that any learning system will only learn what is emotionally significant, which is necessary to assign importance within the nearly unlimited number of possible learning options and situations. This means there can be no learning system (including future robots) without these low-level emotions.

Of course our brains work on a somewhat more complex level: while newborns behave according to the simple principle above, adults experience more than these two basic emotions and also react emotionally to more than just the pre-wired situations. The latter is due to the fact that the brain learns to predict future situations from current and past situations and associates with them the emotional value of the predicted future situation. On an abstract level this means that I have associated “buying groceries” with “being good when hungry”, because I have learned that I can buy food there and that food is good against hunger. So situations “on the way” to a good or bad emotion will be associated with it as well. This prediction of the future also causes one new emotion to arise: surprise. It is quite a strong but usually short emotion, and its purpose is to correct a wrong prediction. If I have learned that a ball always rolls down a slope and now observe that it doesn’t, I’d be quite surprised. That’s usually how jokes work, btw – guide the prediction in a certain direction and then counter with something quite opposing.
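
Just to make the “situations on the way inherit the emotional value” idea concrete, here is a tiny toy sketch in the spirit of temporal-difference value learning (the states, numbers and learning rate are purely illustrative, not a claim about how the brain actually implements this):

```python
# Toy sketch: the value of the pre-wired "good" situation propagates backwards
# to the situations that reliably lead there.
value = {"see_store": 0.0, "buy_food": 0.0, "eat_when_hungry": 1.0}  # pre-wired: food is good
chain = ["see_store", "buy_food", "eat_when_hungry"]
lr = 0.5

for _ in range(20):
    for s, s_next in zip(chain, chain[1:]):
        # associate a situation with the value of the situation it predicts
        value[s] += lr * (value[s_next] - value[s])

print(value)  # "buy_food" and even "see_store" end up carrying positive value too
```

In this picture, surprise would simply correspond to a large prediction error.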

So what about all the other emotions like joy, fear, anger, grief and so on? I believe they arise from the prediction of certain low-level emotions and are therefore emergent from these low-level emotions. For example, joy emerges when we get into a situation in which we predict something good to happen. Conversely, fear emerges when we predict something bad to happen. Anger emerges when we predict something good to happen but realize that we cannot reach that situation because of some obstacle. Grief emerges when we realize that something good is not achievable anymore (e.g. a loved person died). Also, the brain will create emotion-specific behavioural pathways which help us deal best with a situation while we are in a certain emotional state.

All emotions are transitional, which means that we cannot sustain any one emotion for long periods of time. This is necessary to avoid getting stuck in an emotion, though it still happens in some people in whom this system is malfunctioning or who have been traumatized (a sort of over-learning due to an excessive emotion).

I tried a new way of rendering voxel datasets. This first iteration is just a proof of concept which appears very promising.
The goal I had in mind when starting this project was to be able to render large voxel-based game scenes at realtime performance. My approach consists of automatically converting the original voxel data into a set of heightmaps which are structured in space using a bounding volume hierarchy (BVH tree). The actual rendering is done using ray-tracing.
It is still vastly unoptimized, but already runs at >60 fps on a single GPU (at an output resolution of 1024²).
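
To give an idea of the core operation (this is not the GPU code; the fixed-step march and the toy terrain are simplifications), here is a Python sketch of ray-marching a single heightmap patch – the BVH’s only job is then to decide which patches a given ray has to test at all:

```python
import numpy as np

def ray_heightmap_hit(hm, origin, direction, step=0.25, max_t=256.0):
    """March a ray across a heightmap patch; return the first hit point or None.

    hm        : 2D array of surface heights (indexed as hm[y, x])
    origin    : (x, y, z) ray origin, z being "up"
    direction : (dx, dy, dz) ray direction (normalized inside)
    This uses a naive fixed-step march; a real renderer would step cell by
    cell (DDA) through the heightmap instead.
    """
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    t = 0.0
    while t < max_t:
        p = o + t * d
        ix, iy = int(p[0]), int(p[1])
        if 0 <= ix < hm.shape[1] and 0 <= iy < hm.shape[0] and p[2] <= hm[iy, ix]:
            return p            # ray dropped below the surface -> hit
        t += step
    return None

hm = np.random.default_rng(1).uniform(0.0, 8.0, size=(64, 64))   # toy terrain
hit = ray_heightmap_hit(hm, origin=(0.0, 32.0, 20.0), direction=(0.9, 0.0, -0.4))
```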

The following rendering shows a fairly simple scene (the data consists of about 17 million voxels before conversion).
The differently colored patches visualize the different heightmaps used.
voxel sculpture

This is my first draft of a particle-based fluid simulation implemented using CUDA.

It displays two non-mixing liquids with different densities, let’s say oil (yellow) and water (blue). From time to time I apply some push force somewhere in the scene to cause some action. You can see pressure waves running through the fluid, waves and breakers appearing at the surface, as well as drops and filaments forming and other surface tension effects.

The simulation consists of 65536 particles and each of them interacts only with its local neighborhood. By exchanging appropriate forces, all of them together form the fluid. None of the phenomena described above are programmed explicitly; they emerge from local particle/particle interactions. The simulation runs at 60 fps on a GeForce GTX 285.
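
The heart of such a particle-based fluid is that every particle only exchanges forces with neighbours inside a small radius. Here is a rough CPU-side Python sketch of an SPH-style density/pressure step (my CUDA kernels are more involved and use a spatial grid instead of brute force; the smoothing kernel and constants here are arbitrary demo choices):

```python
import numpy as np

H, REST_RHO, STIFF, MASS = 1.0, 1.0, 5.0, 1.0   # arbitrary demo constants

def sph_forces(pos):
    """Pressure forces for a small set of 2D particles (brute-force neighbours)."""
    n = len(pos)
    rho = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(pos - pos[i], axis=1)
        w = np.where(r < H, (1.0 - r / H) ** 2, 0.0)   # simple smoothing kernel
        rho[i] = MASS * w.sum()
    p = STIFF * (rho - REST_RHO)                       # equation of state
    f = np.zeros_like(pos)
    for i in range(n):
        d = pos[i] - pos
        r = np.linalg.norm(d, axis=1)
        near = (r > 1e-6) & (r < H)
        pj, dj, rj = p[near], d[near], r[near, None]
        # pressure pushes particles apart where the fluid is compressed
        f[i] = np.sum(0.5 * (p[i] + pj)[:, None] * (dj / rj) * (1.0 - rj / H), axis=0)
    return f

pos = np.random.default_rng(2).uniform(0.0, 4.0, size=(256, 2))
forces = sph_forces(pos)
```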

The video plays back at half speed because it is encoded at 30 fps whereas the original rendering ran at 60 fps.

My plan is to add (particle-based) rigid bodies as well and to perform finite element analysis on them so they can break apart if external forces exceed the bonding forces.

This is my realtime additive synthesizer with up to 512 oscillators per voice and as many voices as your hardware can handle. You can modulate virtually any parameter with 2 of many modulators (6 N-breakpoint envelopes with adjustable slopes, 4 syncable sequencers, 8 syncable LFOs per voice, 4 syncable LFOs per channel).
It runs as a VST-plugin under Win32 systems.
Due to its additive synthesis and nice design 😉 you can easily create rich, smooth and warm ambient sounds, sharp and crisp electronic sounds, realistic natural sounds, or some mixture of any of these. Basically this thing can produce virtually any sound imaginable. You can even let Additizer analyze an audio file, and all parameters are set up automatically to reproduce that audio file as well as possible (Additizer is NOT sample-based; you can freely change the analyzed sound as usual).
Even for very simple waveforms like pulse waves it may be useful, as any sound produced by Additizer is virtually alias-free, which is important for high fidelity of sharp and harsh sounds (but in some situations this would be like taking a sledgehammer to crack a nut).
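
At its core, additive synthesis is nothing more than summing sine partials with individual amplitudes and envelopes. A stripped-down Python sketch (the spectrum and envelope here are invented for illustration, not Additizer’s actual engine) also shows where the alias-free property comes from – partials above Nyquist are simply never generated:

```python
import numpy as np

SR = 44100  # sample rate

def additive_voice(f0, n_partials=128, dur=2.0):
    """Sum of sine partials; partials above Nyquist are skipped (alias-free)."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        fk = k * f0
        if fk >= SR / 2:                          # never generate aliasing partials
            break
        amp = 1.0 / k                             # sawtooth-like spectrum as a demo
        env = np.exp(-3.0 * t * k / n_partials)   # brighter partials decay faster
        out += amp * env * np.sin(2 * np.pi * fk * t)
    return out / np.max(np.abs(out))

tone = additive_voice(220.0)
```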

But listen for yourself! All sounds are generated purely by Additizer with no post-processing applied. Ah, and every sample here is just a single voice held for a few seconds (only using 128 of the 512 available oscillators per voice).
Piano – analyzed and synthesized
Drone – from scratch
Bell Pad – from scratch
Feedback Pad – from scratch
Electro Whalesing – from scratch

[EDIT]
As I haven’t continued this project for a long time and it is questionable whether I ever will, I decided to put up the current unfinished but mostly working version. It can be downloaded here. Just drop it in your VST directory.
Feedback welcome. Have fun!

This was quite a lengthy project: a fully running Commodore Amiga 500 emulator. It has all the necessary features to run most of the games from back in the day.
It is basically a simulation of the Amiga’s main processor, co-processor, blitter, display chips, sound chips, ROM, RAM, disk drives, DMA and all those things.
It comes with a debugging panel and a simple disassembler, as can be seen in the screenshot. And yes: “Chuck Rock” runs like a charm, including the great music and everything… 😉
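
The heart of any such emulator is a loop that keeps all the simulated chips in lock-step, cycle by cycle. Here is a very condensed Python sketch of that idea (purely illustrative – the real thing emulates the actual 68000 instruction set and all custom chips, and is written at a much lower level):

```python
# Highly simplified sketch of an emulator main loop (illustrative only).
class ToyCPU:
    def __init__(self, memory):
        self.pc = 0
        self.mem = memory

    def step(self):
        opcode = self.mem[self.pc]    # fetch
        self.pc += 1
        # a real emulator would decode the opcode and execute it here
        return 4                      # cycles this instruction took

memory = bytearray(1024)
cpu = ToyCPU(memory)

cycles = 0
while cycles < 1000:                  # one slice of emulated time
    cycles += cpu.step()
    # the other chips (blitter, display, sound, disk DMA) would be advanced
    # here by the same number of cycles to keep everything in sync
```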

I think it’s about time for a round of “Pinball Dreams”…

This rendering was produced by evaluating the famous Mandelbrot fractal, but instead of just counting the number of iterations for each pixel I wanted to draw the positions of the full sequence/trajectory (until bailout) of points resulting from each starting point. Since the iterated function is quite chaotic, the points within the sequence scatter all over the image in a very unpredictable fashion.
The challenge of rendering this image lies in the fact that when displaying a tiny region of the overall space, only a tiny fraction of the points will fall inside the view and billions of them will fall somewhere else where they don’t contribute to the final rendering at all. In order not to waste billions of iterations to get just a few rendered pixels, I used the Metropolis-Hastings algorithm. This technique “mutates” past trajectories (or paths) which did successfully fall inside the view in order to produce (with high probability) more points which will also fall inside that view. The algorithm works well because similar starting points produce similar trajectories (although trajectories diverge in a roughly exponential fashion). This technique is also quite useful when computing global illumination of 3D scenes, as that is sampling-wise quite a similar problem.
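
For the curious, here is a stripped-down Python sketch of the sampling idea (a “Buddhabrot”-style accumulation with a simple mutation step; the view window, mutation size and iteration counts are arbitrary, and a full implementation would additionally re-weight the splats and mix in fresh random starting points to stay unbiased):

```python
import numpy as np

rng = np.random.default_rng(3)
WIDTH, HEIGHT = 256, 256
view = (-0.8, -0.7, 0.05, 0.15)             # x_min, x_max, y_min, y_max of the zoom
img = np.zeros((HEIGHT, WIDTH))

def trajectory(c, max_iter=500):
    """Iterate z -> z^2 + c and return all visited points until bailout."""
    z, pts = 0j, []
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            break
        pts.append(z)
    return pts

def hits_in_view(pts):
    x0, x1, y0, y1 = view
    return sum(1 for z in pts if x0 <= z.real <= x1 and y0 <= z.imag <= y1)

def splat(pts):
    x0, x1, y0, y1 = view
    for z in pts:
        if x0 <= z.real <= x1 and y0 <= z.imag <= y1:
            px = int((z.real - x0) / (x1 - x0) * (WIDTH - 1))
            py = int((z.imag - y0) / (y1 - y0) * (HEIGHT - 1))
            img[py, px] += 1

# Metropolis-Hastings over starting points c: mutate a starting point that already
# contributes to the view, and accept the mutation with a probability proportional
# to how many of the new trajectory's points land inside the view.
c = complex(rng.uniform(-2, 1), rng.uniform(-1.5, 1.5))
pts = trajectory(c)
score = hits_in_view(pts) + 1e-9
for _ in range(20000):
    c_new = c + complex(rng.normal(0, 0.01), rng.normal(0, 0.01))
    pts_new = trajectory(c_new)
    score_new = hits_in_view(pts_new) + 1e-9
    if rng.uniform() < score_new / score:    # Metropolis acceptance
        c, pts, score = c_new, pts_new, score_new
    splat(pts)                               # accumulate the current sample
```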