firebrandboy wrote:

I'm really looking for insight into the process here. Did you take a probabilistic approach? Sonifying a mathematical algorithm, etc.?

When I played around with it, I made a mental map of what I wanted to achieve. Since generative music is basically coding for me, I used the same approach I use when designing programs.

You got your basic black box layout:
Input->Black Box->Output

The input in this case is a set of random (or pseudo-random) numbers. Choosing the random source is probably the most fun aspect, because you can get really funky with it: Radio static, space rays, fractals, the text of a book etc.
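To make the "text of a book" idea concrete, here's a rough Python sketch of one way to turn an arbitrary source into a repeatable pseudo-random stream (the hashing scheme is just my assumption, not a prescribed method):

```python
import hashlib

def text_stream(text, n):
    """Turn any text (a book chapter, say) into a deterministic
    stream of pseudo-random floats in [0, 1)."""
    out = []
    for i in range(n):
        digest = hashlib.sha256(f"{text}:{i}".encode()).digest()
        # First 4 bytes as a 32-bit integer, scaled down to [0, 1).
        out.append(int.from_bytes(digest[:4], "big") / 2**32)
    return out
```

The nice property is that the same book always produces the same "random" piece, so the source material becomes part of the composition.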

The output determines a few things about the black box: How structured should it be? Do I want 12-TET notes? Should it be slow or fast? And so forth.

For each synth in the array, the black box opens to this content:
Input->Quantizations->Parameters->Synth

I then decide, on a per-voice basis, which parameters I want to manipulate and which should stay static. This determines the number of quantization functions I have to write for the instrument. Let's say:
- Trigger
- Envelope
- Note
- Amplitude

There are a number of ways to approach this, depending on your aesthetic preferences. For example, for a droney background pad, the trigger function doesn't need to be that strict about hitting a tick. In the same example, you'd probably want a generally long envelope, so the functions need to be interdependent (maybe sharing the same function with inverse output).
For notes, you'll generally want a quantization that translates to harmonic steps.
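A note-quantization function like that might look as follows in Python (the pentatonic scale table and the parameter names are my own illustration):

```python
# Minor pentatonic degrees, in semitones above the root. Any scale
# table works the same way; this one is just an example.
SCALE = [0, 3, 5, 7, 10]

def quantize_note(x, root=48, octaves=2):
    """Map a random value x in [0, 1) onto harmonic steps:
    pick a scale degree, then offset by octave and root note."""
    steps = len(SCALE) * octaves
    step = min(int(x * steps), steps - 1)
    octave, degree = divmod(step, len(SCALE))
    return root + 12 * octave + SCALE[degree]
```

Whatever random source you feed in, the output can only land on notes that belong to the scale.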

If I want to have a global structure (arrangement, chord changes, speed changes, etc.), there have to be functions which control the other functions accordingly. The quantization functions for notes would need an input parameter telling them which harmony to shift to in case of a chord change, for example.
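That control-function layering could be sketched like this (the progression and all names are hypothetical):

```python
# A hypothetical four-bar progression; each entry lists the pitch
# offsets (in semitones) allowed while that chord is active.
PROGRESSION = [
    [0, 4, 7],    # I
    [5, 9, 12],   # IV
    [7, 11, 14],  # V
    [0, 4, 7],    # I
]

def current_chord(tick, ticks_per_bar=16):
    """The 'global structure' function: derive the chord from time."""
    bar = (tick // ticks_per_bar) % len(PROGRESSION)
    return PROGRESSION[bar]

def quantize_to_chord(x, chord, root=60):
    """Note quantizer taking the harmony as an input parameter."""
    return root + chord[min(int(x * len(chord)), len(chord) - 1)]
```

The per-voice functions stay dumb; only the global function knows where in the arrangement you are.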

Of course, there's always the Just-Try-Shit-Out approach in which you slam a random source to some parameters and see what happens, knob twiddling your way to success. wink I've spent hours just exploring synthesis that way.

Edit: Here's a very strict example. Arrangement is fixed, and the only random aspect is the probability of each note trigger.
http://chipmusic.org/µb/music/yellowjacket
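The core of that "fixed arrangement, random triggers only" setup can be sketched in a few lines (a toy model, not the actual code behind the track):

```python
import random

def render_pattern(probabilities, seed=None):
    """Fixed arrangement: one probability per step. The only random
    element is whether each step's note actually fires."""
    rng = random.Random(seed)
    return [rng.random() < p for p in probabilities]
```

Steps with probability 1.0 always trigger, 0.0 never do, and anything in between gives the piece its variation from run to run.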

This may only be of interest to some of you; it's not exactly a chipmusic effect, but... well, maybe someone will make good use of it.

I've had this WIP VST effect lying around, with which I tried to add bad-reception effects and distortion to a signal, like on a shortwave radio. I recently used it in a track and thought maybe others would like it.
The emulation of the effects/filters/etc. is not perfect, but sufficient, I think. At least for some settings you can produce a pretty good approximation.

Controls are very simple. There's an on/off switch in the upper left corner. The small dial next to it controls the dry/wet mix.
For the actual effect, I've tied all effect settings to a single value, so you can explore the settings for something nice. wink

The button row on top selects the 'band', which is of course just different effect offsets. The big knob on the right controls the 'station', which adds to the same effect offset, but in finer steps.

I recommend additional filtering of the signal before input; you can get nicer results that way.

This is what it sounds like with an old Billy Murray song smile


Link:
http://www.milkcrate.com.au/ub/RadioMoscow.zip

little-scale did some algorithm-based work, at least. I remember his "Swarm" and stuff like his weird laser-waterbowl.

A few others here have dabbled with it. I once wanted to do a compilation, but that came to naught because I had to deal with RL shit then (sorry everyone).

There's also the one-liners viznut et al. made.

gijs has done some too I think.

100

(24 replies, posted in Audio Production)

Right, so I tried to put the lessons to use. If you can, ignore the composition; I did that mostly to make the task more difficult.
http://chipmusic.org/µb/music/milgrams-cat

It would be cool if you could point out any errors or dodgy mixing.

Also, kudos if you managed to get through the whole thing big_smile

101

(24 replies, posted in Audio Production)

That's good advice. I've actually done this in the past, when I mixed Sievert's album to make sure the mix is consistent.
What's your view on mix EQ vs. master EQ? It's a lot easier to EQ the master if you're going for a specific spectrum, but I guess making sure the mix is already good is the better way.

Currently I use a 3-band EQ on each channel and a 16-band EQ on the master. I have difficulty controlling the sound that way, especially when compression is involved. Any advice on this?

Hey guys, I'd be interested in your usual routine for checking whether your mix and master are good. I usually let a new track lie for a night after I'm done, then use an oscilloscope/spectroscope to look at the amplitude and frequencies for spikes. Mostly, though, I go by ear and try to get the sound I was aiming for.

Now, there are a couple of things I've noticed I'm unsatisfied with. For example, I have trouble making a track sound good on both headphones and speakers. Or when I listen to it alongside other tracks, I notice that the volume is off, or that it sounds dull in comparison, something like that.

I'm looking for advice on how to improve my mix/master verification process. How are you guys doing it? Do you do multiple checks at different volume levels, do you mix on speakers and master on headphones, etc.?

Hello! smile

I am a very happy owner of this:
http://truechiptilldeath.com/blog/2009/ … xygenstar/

105

(8 replies, posted in Releases)

Jellica wrote:

me and phil make videos but no one cares sad

patawic wrote:

The trouble is, I can't seem to get the arpeggio fast enough; it's going faster than 128ths, to the point where the VST plugin can't produce all of the beats sad

There are VSTi with a built-in arpeggio function. They can usually be set to fractions of ticks.
Take a look here: http://woolyss.com/chipmusic-plugins.php

ant1 wrote:

µB, you are wrong, the samples are 8-bit. The 6-bit volume control is actually additional to the 8-bit samples. Somehow, using this, even 14-bit sound playback is achieved.

akira^8GB wrote:

He's right, this is how the AHI stream driver for Paula works. You can even do 12 bit samples.

Ok, I'm confused now. Is there a doc I can read up on?

108

(36 replies, posted in Software & Plug-ins)

akira^8GB wrote:

Do you think recording at a higher volume, even maybe clipping it a bit, would help? How about amplifying by software? That'd make the noise worse I would tend to think...

Clipping would actually make things worse. Although approximating the sine closer to a square wave might help a little: the closer to a square, the fewer volume jumps you'll have, or in other words, you're decreasing the time period between voltage flips. The downside, of course, is that the sound will approximate that of a square as well. Post-amplifying would only increase the sound (including the aliasing), as you said. The restriction is inherent, and if it's as I think it is, you're stuck with 6-bit samples in one channel. A quick check with Google seems to confirm this (4x 8-bit sample channels with 6-bit volume).
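The "fewer volume jumps" point is easy to check numerically. A rough Python sketch (my own illustration, counting level transitions after 6-bit quantization):

```python
import math

def quantize(samples, bits=6):
    """Quantize [-1, 1] samples to 2**bits discrete levels."""
    levels = 2 ** bits
    return [min(int((s * 0.5 + 0.5) * levels), levels - 1) for s in samples]

def jumps(q):
    """Count sample-to-sample level changes (the 'volume jumps')."""
    return sum(1 for a, b in zip(q, q[1:]) if a != b)

N = 256
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]
square = [1.0 if s >= 0 else -1.0 for s in sine]
```

One cycle of the quantized sine steps through dozens of level changes, while the square only flips once or twice, which is why it suffers much less from the coarse volume resolution.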

I don't know... my next impulse would be to suggest interpolation of the output signal. This doesn't work well (the interpolation should ideally happen on each individual channel) and will dull the sound, akin to a low-pass. You might be able to polish up the sound a little with post-EQ.


Edit: Just did a quick test mixing square and sine waves. The square becomes very apparent with as little as 15% mix, and the effect on the aliasing is negligible. hmm

Edit 2: Currently testing with sigmoid functions, and the results are somewhat more pleasing: (1/(1+exp(p*x)))-0.5 makes the sound more bell-like for increasing p, and the faster voltage change reduces the aliasing.

109

(36 replies, posted in Software & Plug-ins)

I just thought of something that might explain the aliasing by PT. It's possible the sample volumes are reduced to 1/4 so the total output can't exceed 255, which would mean you have 4x 6-bit tracks (64 levels per track).
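If that guess is right, the arithmetic works out like this (just the numbers behind the hypothesis, not verified against PT's source):

```python
# If each of the 4 channels is scaled to a quarter of the 8-bit range,
# every track gets 64 levels (6 bits) and the mix can never overflow.
channels = 4
levels_per_track = 256 // channels                     # 64 levels = 6 bits
max_mixed_output = channels * (levels_per_track - 1)   # 252, fits in 0..255
```

So the mixer trades two bits of per-track resolution for overflow-free summing.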

akira^8GB wrote:

Protracker is aliasing the sound already somehow, but I just get too much noise when the volumes are low and the hertz are low too. Sometimes I can't sample a louder bass sound... It's tricky.

I don't know if there's a way around that. The low bit resolution essentially converts your waveform into a pile of stacked square waves. Low notes make that more apparent, since the amplitude changes happen more slowly, and each decrease in volume pronounces it further, because you're reducing the number of amplitude jumps. The only thing you can do about it is to increase the number of samples (i.e., decrease the phase step per sample), which helps with the rate of amplitude jumps. The only way around the volume problem I can see is multitracking: running two synched machines with complementary waveforms and halving the output volume. (Simply running two copies won't do the trick, since doubling the output and then halving it again does nil; you'd have to adjust the levels so they add up to 'between' steps.)
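The complementary-waveform idea can be sketched as arithmetic (a toy model with made-up names, not machine-specific code): two 6-bit channels summed together can land on 'between' steps the single channel can't reach.

```python
def split_sample(v):
    """Split a value in 0..126 across two 6-bit channels (0..63 each)
    whose sum reproduces it. Two summed channels at half volume give
    127 distinct output levels instead of a single channel's 64."""
    assert 0 <= v <= 126
    a = v // 2   # first machine's waveform sample
    b = v - a    # complementary sample on the second machine
    return a, b
```

Getting the two machines sample-accurately in sync is of course the hard part in practice.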

I can't help with the aliasing PT adds by itself, I never worked with it. Do you sample/convert natively on the Amiga? Comparing the input with the PT output might give an idea what's going on.

Hm, if you can spare a channel, you may be able to increase resolution with a parallel signal which complements the first. Calculating the different waveforms to do that might be tricky, though.

Edit: Another option would be to slightly modulate the waveform (so it's not a pure sine). I usually try to do that for sine/tri basses so the aliasing 'fits' the bass sound (i.e., works with it, not against it).

112

(226 replies, posted in Software & Plug-ins)

komanderkin wrote:

part 2 has been released, 14 new instruments:

http://bedroomproducersblog.com/2012/04 … mple-pack/

It's so awesome to have you here! You may have noticed I've linked to BPB a few times before in this thread, thanks so much for making it and sharing!