Protip: If you want to keep track of mentions of you, don't name yourself after something like a Bohr Magneton
That said, I'm ok with my handle. I like that it's ambiguous (as your ???).
I dunno. I usually name my tracks after snippets of browser fallout I stumble across. If you want something unique, aiming for 0 google hits would be the way to go.
1.) First contact as a listener was the Last Ninja soundtrack on C64. As a creator, I started sketching out songs for my band with an Adlib card. I very quickly fell in love with the crazy sounds FM can produce, and started to make little songs specifically for the Adlib. Later I switched to sample trackers (ScreamTracker/FastTracker2).
2.) Music which uses simple waveform instruments, techniques and modulation types, reminiscent of or created by basic synthesis chips as sole or defining elements.
3.) Rob Hubbard - Monty on the Run (C64 version). It's a very iconic track, uses the SID (which in my very personal opinion is the chip that kick-started chiptunes), has a catchy melody, and a very nice spectrum of instruments (demonstrating the width of sonic possibilities with simple waveforms and basic modulation/filters).
4.) Any instrument that allows you to produce music as described in 2.) will do. Pocket calculators, bent toys, samplers... however, the most iconic, in my opinion, would be the C64 for the first, the Amiga 500 for the second, and the Gameboy for the third wave of chiptune popularity.
zan-zan, even the anonymity of the bands you mentioned is part of their persona (ie, they too put on an act to make themselves stand out). The internet is a blessing and a curse here, too: while it lets you share your music easily, the competition has gone way up. You'll need some PR if you want any kind of exposure.
Question for those who prefer EQ before compression: My reasoning for doing it the other way around is that compression changes the dynamics of the frequencies and kinda screws with the +/- you assign to them with an EQ. So wouldn't it be better to work from the altered (=new) spectrum? This may be because I tend to prefer limiters in instrument chains, which have a higher ratio/harder knee (and thus produce flatter output). That may actually be a bad habit, so feel free to advise me otherwise.
Well, it depends what kind of effect you want. For example Reverb->Chorus can create some interesting effects.
In general, I tend towards source->modulation->stereo->dynamics->distortion->eq->delay->verb(->mix)(->master chain)
I put filters wherever I want the frequencies modified, which depends on the kind of sound I want to get.
Putting EQ after compressors/limiters has two advantages, imo: you can shape your frequency profile more predictably, because the comp/lim has already caught the spikes, and you get some dynamics back (avoiding a brickwalled output).
I also have no problem with multiple EQ stages. In fact, my usual approach is to put a rough (3 band) EQ on each chain, then mix. My master chain is usually just a multiband compressor and a 16 band EQ behind that.
I've picked up work on Nautilus again. My goal is to create a usable VSTi platform for writing 1-bit tracks, with an instrument editor that allows for flexible waveform manipulation, so each voice can have its own character. I'd also like to stack a lot of channels on top of each other and still keep it listenable. The basic idea is to have something that simulates a beeper in a DAW, but also lets you go beyond that.
Although it worked ok-ish, there were some things I wasn't satisfied with.
The build allowed for about 4 or 5 simultaneous voices before it degraded to noise. Since I have the power of a DAW behind it, I wished for something more; how cool would it be to have 15 voices and drums playing in 1-bit and still sound good! This is tricky. The nature of 1-bit waveforms makes stacking voices complicated: you either end up with a constant ON state, or with noise if you make every other voice subtractive. Also, I want to maintain the instrument character of each voice, which includes PWM; once you have many voices, the pulse width of each has to be reduced, which leaves less room for that and thins out the sound. So I started experimenting with dynamic pulse width algorithms, based on pitch, number of voices, total average of ON states, and so on.
This took a lot of time. I tested various setups for sound quality, based on binary logic, subtractive algorithms, MUXing... you name it. Eventually I grew frustrated and moved on to other things. Recently, I hit a slump with writing music and started to play with Synthedit again. By chance I stumbled over an OSC setup that created a pleasing dynamic PW on voices, at which point I decided to pick up work on Nautilus again. I also made some modifications to the auto-arp to make it fit better.
Here's an mp3 render of an 11-channel MIDI. Multiple voices on the same channel are auto-arpeggiated. This test doesn't have instruments (no detuning, PWM, vibrato, portamento or anything). Keep in mind that the MIDI file wasn't made to be played like this, so there are a few annoying notes. The main test here was to have about 30 voices playing in 1-bit without it dissolving beyond recognition. (Check your volume, it's a bit loud.)
Next thing to do is to CPU optimize the core, and then find a workable interface solution for instruments.
Also, sorry for the blog post, but I usually stick with projects better once I've made some kind of public commitment.
Inspiring indeed. Incredible how the math translates into those patterns.
Lazerbeat wrote:
Forgive me but would someone be sweet enough to really dumb this down for me and explain it like im 5? It sounds awesome but I feel I am missing a bit of the awesome due to lack of background knowledge.
I'll try the basic stuff. Nitro's post will undoubtedly be more in-depth.
The output for these is 8-bit / 8kHz. That means the amplitude range is from -128 to 127 (or 0 to 255 unsigned) in decimal values. At 8kHz, the program puts 8000 samples (chars) per second on the output. 8-bit PCM encoding means that each output value is translated directly into an amplitude value by the sound device.
The program is a simple loop which increases the t variable until it hits 65535 (assuming it's 16 bits), then wraps back to 0. The body is just a single function call which puts out a char value (8 bits). Should the result of the expression be larger than 8 bits, it gets shortened (truncated) to 8 bits.
The clever thing here is to choose constants and operations in such a way that they modify t to generate a "pleasing" output, ie something that sounds pattern-y enough to pass as structured music. Which is awesome, because it gives a peek at what the math behind something our brain recognizes as "music" looks like.
µB: t is an int, 32 or 64 bits or so. The clever thing here is that they define t as the argument to main, which shaves away a few bytes of source code (Normally int t; or so). putchar also takes an int, but as far as I understand it, it truncates the output to 8 bits, which is what gets sent to stdout and then into /dev/audio.
Ah, ok. I was assuming that t is a char, and auto-truncates each time it exceeds 255. Is the binary shift circular in C?
The problem will not be with your mp3 encoder. I highly doubt the website checks the validity of the mp3, although a corrupt file will not play in the flash player.
At this point I'd wait for an admin to log on so he can check the server logs.