
With this post I wanted to write down some of my thoughts about the nature of consciousness. It’s not meant to be the ground truth but rather a hypothesis.

I’m not going to cover every aspect of it, as that would take too long, and the very definition of consciousness already varies greatly between people. But let’s consider the part we deal with every day: we have the impression that our conscious mind is somehow supervising/controlling our “automatic” behaviour and is somewhat separate from it. To make a long story short, I believe all of this is an illusion, and I want to explain why.

Let’s just consider that the brain is a sort of “machine” which senses its environment through connected sensors of different types (sight, hearing, touch, smell, taste) and produces behaviour via connected muscles. This “machine” is able to adapt so that it produces behaviour (output) from senses (input) in order to survive and reproduce (a capability shaped by evolutionary pressure).
It adapts by recognizing repeating patterns in the world and behaving in ways which have been beneficial in similar situations in the past, while avoiding behaviours which were disadvantageous. Being able to recognize repeating patterns also means being able to predict the future to some degree: you recognize the beginning of a sequence and simply expect it to unfold the same way it did every time you experienced it before. Of course there are plenty of details to this, but that is another story.
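The “recognize the beginning of a sequence, then expect the rest” idea can be sketched as a toy predictor. This is purely my illustration, not a model of the brain: it just counts which symbol has followed each short context before, and expects the most frequent continuation.

```python
from collections import Counter, defaultdict

def train(sequence, order=2):
    """Count which symbol follows each context of the given length."""
    counts = defaultdict(Counter)
    for i in range(order, len(sequence)):
        context = tuple(sequence[i - order:i])
        counts[context][sequence[i]] += 1
    return counts

def predict(counts, context):
    """Return the most frequently observed continuation, or None
    if this context has never been seen."""
    followers = counts.get(tuple(context))
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A repeating pattern: after seeing its beginning, the rest is expected.
model = train("abcabcabcabc", order=2)
print(predict(model, "ab"))  # 'c'
```

Prediction failures (an unseen context) are exactly the “surprising” cases; the learner then has new material to incorporate.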

Now what about consciousness? I think it is just a consequence of this way of processing. First of all, we experience our own behaviour. Let’s assume there is no such thing as a conscious mind. We have simply learned what is best to do in the situation we’re currently in. We do something and at the same time sense that we’re doing it, in a feedback manner – even infants do this, which are certainly not conscious in the common sense, and their behaviour is far from optimal, but it gets refined with every trial as they figure out what works and what doesn’t. As a side note, it is worth mentioning that this sensory feedback loop is not the only feedback in higher animals, including humans: these more advanced brains also contain internal neural feedback loops which maintain “context”, so that something out of sight is not immediately out of mind – which greatly enhances predictive capabilities.
What happens over time is that we become able to predict our own behaviour in familiar situations, just as we can predict the situation itself. Basically, our behaviour is just part of the situation (especially for newborns, which are not yet aware that e.g. their arm is part of them – to them it is no more special than the toy beside it; this is learned later on).
And here is the catch: what we call consciousness is simply the ability to predict our own behaviour, which gives us the illusion of actually causing or being in control of that behaviour – which is not the case. Consciousness is not necessary for behaviour but a consequence, a side-effect of how that behaviour is acquired by means of prediction from past experiences and feedback – a completely overrated illusion (at least in the being-in-control definition used here).
Usually this illusion is pretty much perfect, but in uncommon situations it sometimes becomes more evident, as we simply cannot predict our behaviour accurately anymore; in extreme situations this may even result in a “conscious breakdown” – not being able to predict oneself anymore and therefore completely “losing control and/or the sense of self” (feeling as if driven by an auto-pilot). Fortunately most people will never experience such extreme situations, but the same effect can be produced by interfering with that self-prediction in various ways, such as drugs, sensory deprivation or brain damage. What all of these have in common is an impairment of memory, sensing or prediction capabilities, which in turn impairs the prediction of one’s own behaviour, giving the perception of losing control or not being oneself anymore (altered states of consciousness). In the extreme case of being completely incapable of predicting our own actions, the ego dissolves out of existence.

I’m pretty sure that the idea that consciousness is needed for normal behaviour is wrong. In a way it’s the other way around: the absence of consciousness is a sign of impaired brain function, yet consciousness itself remains an unnecessary illusion. First there was behaviour from prediction and sensory feedback, and as a result there was the ego and consciousness illusion.

But if you now think this means all your “conscious efforts” are futile, because you really are just a machine, you didn’t get the point. 😉

It is part of my interest to understand the quantum world in a less abstract and weird way than it is currently framed. I believe there is a more “Newtonian” concept hidden in the weirdness of quantum mechanics, so I came up with this frictionless fluid model to describe the aether (the void of empty space). In this model, structures (sorts of vortices) can form, appearing to us as “elementary particles” with their respective properties (like charge etc.).

This simulation shows the evolution of an initial disturbance (in momentum) in an experimental aether. It is mostly axially symmetric because the initial conditions are, so it is easier to follow what is happening. Actually the initial disturbance is slightly off-center (0.1% of the simulation cube length) which becomes apparent towards the end of this video – I did this to show the extreme sensitivity to initial conditions.

Here you can see it in action.

After thinking about how our universe might work at the lowest level I got the idea that what we call “particles” are nothing but vortices in a frictionless fluid, so I made a simulation to test this idea.

So this is a simulation of an aether (1920×1080 grid; toroidal topology/wrapping) with an initial disturbance leading to the temporary creation of particles/anti-particles (aka vortices), their annihilation and energy radiation. It illustrates an intuitive take on wave/particle duality, the conversion between these two forms, why particles cannot be pinned down to an exact position (uncertainty), bremsstrahlung, and a mechanism for “borrowing energy from empty space for brief periods of time”. Particles are accelerated/decelerated by waves of energy while repelling other particles and attracting their anti-particles.

The waves would actually travel at the speed of light, so this simulation is an extreme slow-motion view of what might be happening at the smallest scale of our universe.

Colors and arrows indicate “direction of flow of free space”, brightness indicates pressure and white regions indicate vortex/particle centers.
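For a rough idea of the kind of dynamics involved, here is a minimal 2D sketch. It is not my actual simulator (which also tracks pressure and uses a much larger grid), just the same flavor of physics: inviscid, incompressible flow on a periodic grid in vorticity–streamfunction form, solved pseudo-spectrally, where an off-center Gaussian disturbance is advected by the velocity field it induces itself. Grid size and time step are arbitrary choices for the sketch.

```python
import numpy as np

# Toy 2D frictionless-flow sketch (pseudo-spectral, periodic grid).
# The vorticity field w is advected by its own induced velocity.
N = 64
k = np.fft.fftfreq(N) * N                 # integer wavenumbers
kx, ky = k[:, None], k[None, :]
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                            # avoid division by zero

def step(w, dt=1e-3):
    wh = np.fft.fft2(w)
    psih = wh / k2                        # solve laplacian(psi) = -w
    u = np.real(np.fft.ifft2(1j * ky * psih))    # u =  d(psi)/dy
    v = np.real(np.fft.ifft2(-1j * kx * psih))   # v = -d(psi)/dx
    wx = np.real(np.fft.ifft2(1j * kx * wh))
    wy = np.real(np.fft.ifft2(1j * ky * wh))
    return w - dt * (u * wx + v * wy)     # forward-Euler advection

# Slightly off-center Gaussian vortex as the initial disturbance.
x = np.arange(N)
X, Y = np.meshgrid(x, x, indexing="ij")
w = np.exp(-((X - N/2 - 0.5)**2 + (Y - N/2)**2) / 20.0)
for _ in range(10):
    w = step(w)
```

Because the flow is frictionless, nothing damps the vortex; total vorticity is conserved while its shape slowly deforms, which is also why such simulations are so sensitive to initial conditions.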


Current algorithm used to extract sparse features as shown in the previous post.

Extracted sparse features

My current online feature abstraction seems to work fine now. After being fed a 30-minute video once, the system abstracts features (or “components”) of the input by changing each virtual neuron’s synapses (right plot in image). This is done in a way which yields a sparse and quasi-binary neural response to the input (“sparse” to decorrelate the mixed signals and “quasi-binary” to be tolerant to noise).

The reconstruction shows that all essential information is preserved and can be reconstructed from the sparse neural response.
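Since the post doesn’t spell out the update rule, here is a generic sketch from the same family of ideas: a k-winner-take-all response (giving the sparse, quasi-binary code) combined with a competitive Hebbian update. This is an assumption on my part, not necessarily the algorithm shown above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_inputs, k = 16, 8, 2        # k active neurons = sparse code

W = rng.normal(size=(n_neurons, n_inputs)) * 0.1

def sparse_code(x, W, k):
    """Quasi-binary response: only the k best-matching neurons
    fire (1.0), all others stay silent (0.0)."""
    y = np.zeros(len(W))
    y[np.argsort(W @ x)[-k:]] = 1.0
    return y

def online_update(x, W, lr=0.1, k=2):
    """Hebbian-style step: active neurons pull their synapses
    toward the current input."""
    y = sparse_code(x, W, k)
    W += lr * y[:, None] * (x[None, :] - W)
    return y

def reconstruct(y, W, k):
    """Average the prototypes of the active neurons."""
    return (W.T @ y) / k

for _ in range(500):                     # online: one input at a time
    x = rng.normal(size=n_inputs)
    online_update(x, W)

x = rng.normal(size=n_inputs)
y = sparse_code(x, W, k)
x_hat = reconstruct(y, W, k)
```

The reconstruction step shows the same property the plot above demonstrates: a handful of active prototypes is enough to approximate the input.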

In science fiction we often see highly sophisticated self-learning robots, yet most of them completely lack emotions. I believe that even in theory it won’t be possible to create a robot which can learn to behave in a physical world without having some sort of emotions. But I’m sure emotions aren’t some mystical force, so they should be explainable somehow.

Here’s my attempt:
I believe emotions cannot all be treated the same way, as some emotions are built on top of other, lower-level ones. At the lowest level, only two emotions are necessary to give a specific situation informative value and basic judgement: positive/good and negative/bad.

A positive emotion will modify behaviour to increase the chance of reproducing (steering towards) the situation, and a negative emotion will increase the chance of avoiding (steering away from) it.
At first, what is considered good and bad is genetically pre-wired in an individual to maximize the chance of survival and reproduction, and consists of basic concepts like “physical pain, including hunger etc., is bad” or “curiosity, family bonding, sexual attraction etc. are good”.
This basic system is implemented in all learning organisms, probably even down to bacteria, which learn to avoid poisonous substances.
This also means that any learning system will only learn what is emotionally significant, which is necessary to assign importance among the nearly unlimited number of possible learning options and situations. It follows that there can be no learning system (including future robots) without these low-level emotions.

Of course our brains work on a somewhat more complex level: while newborns behave according to the simple principle above, adults do not only experience these two basic emotions, nor do they react emotionally only to the pre-wired situations. The latter is because the brain learns to predict future situations from current and past ones, and associates with the current situation the emotional value of the predicted future one. On an abstract level this means that I have associated “buying groceries” with “good when hungry”, because I’ve learned that I can buy food there and food is good against hunger. So situations “on the way” to a good or bad emotion become associated with it as well. This prediction of the future also causes one new emotion to arise: surprise. It is quite a strong but usually short emotion, and its purpose is to correct a wrong prediction. If I have learned that a ball always rolls down a slope and now observe that it doesn’t, I’d be quite surprised. That’s usually how jokes work, by the way – guide prediction in a certain direction and then counter with something quite opposite.
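The idea that situations “on the way” inherit the emotional value of the outcome can be sketched in a few lines, in the style of temporal-difference value propagation. This is my illustration; the situation names and the discount factor are arbitrary assumptions.

```python
# Toy sketch: situations that reliably lead to an emotionally
# significant outcome inherit a discounted share of its value.

def propagate_value(chain, reward, gamma=0.9):
    """Walk a chain of situations backwards from its outcome and
    assign each one the discounted value of that outcome."""
    values = {}
    v = reward
    for situation in reversed(chain):
        values[situation] = round(v, 3)
        v *= gamma
    return values

chain = ["feel hungry", "go to store", "buy food", "eat"]
print(propagate_value(chain, reward=1.0))
```

The further a situation is from the good outcome, the weaker its associated emotion – which matches the everyday observation that a grocery store feels mildly pleasant when hungry, while eating feels plainly good.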

So what about all the other emotions like joy, fear, anger, grief and so on? I believe they arise from the prediction of certain low-level emotions and are therefore emergent from them. For example, joy emerges when we get into a situation in which we predict something good to happen. Conversely, fear emerges when we predict something bad to happen. Anger emerges when we predict something good to happen but realize that we cannot reach that situation because of some obstacle. Grief emerges when we realize that something good is not achievable anymore (e.g. a loved person died). The brain will also create emotion-specific behavioural pathways which help us deal best with a situation while in a certain emotional state.

All emotions are transitional, which means that we cannot sustain any one emotion for long periods of time. This is necessary to avoid getting stuck in an emotion, though it still happens to some people in whom this system is malfunctioning or who have been traumatized (a sort of over-learning due to an excessive emotion).

I tried a new way of rendering voxel datasets. This first iteration is just a proof of concept which appears very promising.
The goal I had in mind when starting this project was to be able to render large voxel based game scenes at realtime performance. My approach consists of automatically converting the original voxel data into a set of heightmaps which are structured in space using a bounding-volume-hierarchy (BVH-tree). The actual rendering is done using ray-tracing.
It is still vastly unoptimized, but already runs at >60 fps on a single GPU (at an output resolution of 1024²).

The following rendering shows a fairly simple scene (the data consists of about 17 million voxels before conversion).
The differently colored patches visualize the different heightmaps used.
voxel sculpture
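The conversion step can be sketched roughly like this. It is a deliberate simplification: a single heightmap per region only captures terrain-like columns, and overhangs are exactly why the real approach needs a set of heightmaps organized in a BVH rather than one global map.

```python
import numpy as np

# Minimal sketch of the voxel-to-heightmap conversion only (the BVH
# and the GPU ray-tracer are beyond a few lines): a solid voxel
# column is summarized by the height of its topmost filled voxel.

def voxels_to_heightmap(voxels):
    """voxels: bool array of shape (X, Y, Z), Z pointing up.
    Returns an (X, Y) int16 heightmap holding the z-index of the
    highest filled voxel per column, or -1 for empty columns."""
    filled = voxels.any(axis=2)
    # argmax on the z-reversed volume finds the first filled voxel
    # from the top; convert that back to a height.
    top = voxels.shape[2] - 1 - np.argmax(voxels[:, :, ::-1], axis=2)
    return np.where(filled, top, -1).astype(np.int16)

# Tiny terrain-like test volume.
vox = np.zeros((4, 4, 8), dtype=bool)
vox[:, :, 0] = True          # ground plane at z = 0
vox[1, 2, :5] = True         # a pillar up to z = 4
hm = voxels_to_heightmap(vox)
print(hm[1, 2], hm[0, 0])    # 4 0
```

The payoff is memory: a full X×Y×Z occupancy grid shrinks to an X×Y map per region, and heightmaps are cheap to intersect with rays.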

I’ve added some of my older projects.
For some of them I can only give rough technical information for now (as much as I can remember / have time for), but don’t hesitate to ask for more details if you’re interested and I’ll look into it again.

I hope you enjoy it… and leave some comments if you have any 🙂

This is my first draft of a particle based fluid simulation implemented using CUDA.

It displays 2 non-mixing liquids with different densities, let’s say oil (yellow) and water (blue). From time to time I apply a push-force somewhere in the scene to cause some action. You can see pressure waves running through the fluid, waves and breakers appearing at the surface, as well as drops and filaments forming and other surface tension effects.

The simulation consists of 65536 particles, and each of them interacts only with its local neighborhood. By exchanging appropriate forces, all of them together form the fluid. All the phenomena described above are not programmed explicitly but emerge from local particle/particle interactions. The simulation runs at 60 fps on a GeForce GTX 285.
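“Each particle interacts only with its local neighborhood” is usually made fast with a uniform spatial hash grid, so neighbor lookup costs O(1) per particle instead of scanning all 65536. The post doesn’t say how the CUDA version finds neighbors, so treat this 2D sketch as an assumed, typical approach:

```python
import math
from collections import defaultdict

def build_grid(positions, h):
    """Bucket particle indices by grid cell of size h
    (the interaction radius)."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // h), int(y // h))].append(i)
    return grid

def neighbors(i, positions, grid, h):
    """All particles within distance h of particle i: only the
    3x3 block of cells around i needs to be checked."""
    x, y = positions[i]
    cx, cy = int(x // h), int(y // h)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != i and math.dist(positions[i], positions[j]) <= h:
                    out.append(j)
    return out

pts = [(0.0, 0.0), (0.5, 0.0), (3.0, 3.0)]
grid = build_grid(pts, h=1.0)
print(neighbors(0, pts, grid, h=1.0))  # [1]
```

On the GPU the same idea is typically implemented by sorting particles by cell index rather than with hash maps, but the locality principle is identical.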

The video plays back at half speed because it is encoded at 30 fps while the original rendering ran at 60 fps.

My plan is to add (particle based) rigid bodies as well and to perform finite element analysis on them so they can break apart if external forces exceed the bonding forces.

This is my realtime additive synthesizer with up to 512 oscillators per voice and as many voices as your hardware can handle. You can modulate virtually any parameter by 2 of its many modulators (6 N-breakpoint envelopes with adjustable slopes, 4 syncable sequencers, 8 syncable LFOs per voice, 4 syncable LFOs per channel).
It runs as a VST-plugin under Win32 systems.
Due to its additive synthesis and nice design 😉 you can easily create rich, smooth and warm ambient sounds, sharp and crisp electronic sounds, realistic natural sounds, or some mixture of any of these. Basically this thing can produce virtually any sound imaginable. You can even let Additizer analyze an audio file, and all parameters are set up automatically to reproduce that audio file as closely as possible (Additizer is NOT sample-based; you can freely change the analyzed sound as usual).
Even for very simple waveforms like pulse waves it may be useful, as any sound produced by Additizer is virtually alias-free – important for high fidelity of sharp and harsh sounds (though in some situations this would be like taking a sledgehammer to crack a nut).
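The core of additive synthesis, including why it is so easy to make alias-free, fits in a few lines. This is a toy sketch, not Additizer’s engine: a note is a sum of sine partials, and aliasing is avoided simply by skipping any partial whose frequency exceeds the Nyquist limit.

```python
import math

def render_note(f0, partial_amps, sr=44100, dur=0.1):
    """Sum sine partials at multiples of f0, dropping any partial
    at or above the Nyquist frequency (sr / 2) so it cannot alias."""
    n = int(sr * dur)
    out = [0.0] * n
    for k, amp in enumerate(partial_amps, start=1):
        f = k * f0
        if not amp or f >= sr / 2:
            continue
        for i in range(n):
            out[i] += amp * math.sin(2 * math.pi * f * i / sr)
    return out

# A square-like wave: odd partials at 1/k amplitude.
amps = [1.0 / k if k % 2 == 1 else 0.0 for k in range(1, 33)]
samples = render_note(220.0, amps)
```

A subtractive synth bending a naive pulse oscillator has to fight aliasing after the fact; building the spectrum partial by partial sidesteps the problem entirely, at the cost of many oscillators per voice.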

But listen for yourself! All sounds are generated purely by Additizer with no post-processing applied. Oh, and every sample here is just a single voice held for a few seconds (using only 128 of the 512 available oscillators per voice).
Piano – analyzed and synthesized
Drone – from scratch
Bell Pad – from scratch
Feedback Pad – from scratch
Electro Whalesing – from scratch

As I haven’t continued this project for a long time and it is questionable whether I ever will, I decided to put up the current unfinished but mostly working version. It can be downloaded here. Just drop it into your VST directory.
Feedback welcome. Have fun !