1) Why does everyone release their music at the loudest possible volume these days?

2) Why do tracker guys leave multiple instruments playing the same note at the same time, so you get this unintentional spike in volume? Just fix it with a super-quick note delay (the ED1 command).

nitro2k01 edit: Made the title overly clear.

Last edited by nitro2k01 (May 12, 2015 6:28 am)

1) Loudness War
https://en.wikipedia.org/wiki/Loudness_war

2) Naivete/Loudness War

1) BECAUSE IF IT'S NOT THE LOUDEST THING EVER RELEASED IT'S SIMPLY NOT WORTH RELEASING. But really though, the problem isn't the volume, it's the compression. Mastering at -1 or 0 dB is fine as long as you don't squash the file so EVERYTHING hits 0 dB. Take a listen to Jredd & Groovemaster's latest album (although a Bandcamp stream really isn't going to be all that amazing... you should definitely buy it! plug plug), for which I did the mastering. It's mastered at 0 dB, but with gentle compression so only the loud peaks hit 0 dB. The dynamics are still there; the songs breathe in and out instead of just spitting all their instruments at you.
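The "gentle compression so only the loud peaks hit 0 dB" idea can be sketched numerically. This is just an illustrative static waveshaper with a made-up threshold, not the curve used on that album; real mastering limiters also use attack/release envelopes:

```python
import math

def soft_limit(x, threshold=0.7):
    """Leave |x| <= threshold untouched; squeeze only the overshoot.

    tanh maps the overshoot smoothly into the remaining headroom, so
    peaks approach (but never exceed) 1.0 while quiet material passes
    through completely unchanged. The 0.7 threshold is arbitrary.
    """
    if abs(x) <= threshold:
        return x                       # the dynamics below threshold survive
    headroom = 1.0 - threshold
    over = abs(x) - threshold
    return math.copysign(threshold + headroom * math.tanh(over / headroom), x)

print(soft_limit(0.5))    # quiet sample: untouched
print(soft_limit(1.5))    # loud peak: compressed to just under 1.0
```

Brickwalling, by contrast, would push *everything* up toward the threshold, which is exactly the "spitting all their instruments at you" effect.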

2) Small delays like that will introduce phasing/flanging. Sometimes people double their instruments to gain volume past what a single instrument can do. It *can* be a solution sometimes when you're composing with the tracker as an instrument (i.e., using the sound mixing engine's artifacts to create textures that can't be achieved when keeping volumes in check), but in 99% of all cases it's a dumb-as-fuck solution to a problem that is much MUCH better addressed by lowering the volume of everything else around it.
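A quick numeric sketch of both halves of that: doubling an identical note adds +6 dB, halving both volumes keeps things in check, and a short delay comb-filters the pair rather than simply ducking it (sample rate, pitch, and delay length here are all arbitrary illustration values, not tracker-accurate tick timing):

```python
import math

RATE = 44100   # illustrative sample rate
FREQ = 440.0   # illustrative pitch

def tone(n, delay=0):
    """n samples of a FREQ-Hz sine, optionally delayed by `delay` samples."""
    return [0.0 if i < delay else
            math.sin(2 * math.pi * FREQ * (i - delay) / RATE)
            for i in range(n)]

def peak(samples):
    return max(abs(s) for s in samples)

a = tone(RATE)                                   # one note, one second
doubled = [x + y for x, y in zip(a, a)]          # same note twice, no delay
halved  = [0.5 * (x + y) for x, y in zip(a, a)]  # the "turn both down" fix

# ~50 samples is half a period of 440 Hz, so the delayed copy lands
# almost exactly out of phase with the original:
b = tone(RATE, delay=50)
combed = [x + y for x, y in zip(a, b)]

print(round(peak(a), 2))             # ~1.0: one note
print(round(peak(doubled), 2))       # ~2.0: the +6 dB spike
print(round(peak(halved), 2))        # ~1.0: spike gone, balance kept
print(round(peak(combed[100:]), 2))  # tiny: near-total cancellation
```

At other pitches that same delay would reinforce instead of cancel, which is exactly the frequency-dependent comb that you hear as phasing/flanging.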

1) supposedly louder sounds better with all other things being equal, which is what has precipitated the so-called loudness war (wikipedia it!). also people want to crank sh*t up in the car while on the motorway or whatever, which is why top40 chart pop gets the brickwall limiter treatment. i'm sure you know this already though and are just asking for people to turn it down a bit. which is fine :) maybe they should. personally i am not sure why chip people are so big on mastering stuff at all when all the music sounds just fine (or just bad) without it / played straight from the tracker. secretly i think, like a lot of things we as chipsceners do, it is to legitimize our stuff as "real music"

i have no real evidence for that but i know a lot of people are trying to distance what they do from videogames / 1990s finnish teenage nerd culture and maybe the "recording all the tracks separately into logic and then using plugins that you have to pay for to make them louder and more bassy" is a step on that path along with the having it put on vinyls and selling it for real money aspect

that is probably a bit cynical. the technical aspect of music production is quite enthralling and i bet some people just find it addictive or maybe even therapeutic to do something clinical and scientific like that at the end of the artistic process. personally i'm crap at it which explains both why i don't do it and why i'm bitter about people who do

2) probably they are doing copy-paste compo echo and are too lazy to fix it, or maybe they just don't care? recently i like to use a fine pitch bend to separate out notes like that because skronky detune has always been lovely to me, but the delay tip seems good too. mostly i just don't do compo echo, it's a cheap trick and not particularly artistic imo, although i am sure you can find loads of my songs where i've done it and probably even done the thing you complain about
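For what it's worth, a tiny sketch of what a fine detune does to a doubled note: the pair beats at the difference frequency, so instead of a constant +6 dB stack you get a slow swell and cancel, which is the "skronky" character. The 1 Hz detune and the sample windows here are purely illustrative:

```python
import math

RATE = 44100  # illustrative sample rate

def tone(freq, n):
    """n samples of a sine at freq Hz."""
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

n = RATE                   # one second
a = tone(440.0, n)
b = tone(441.0, n)         # finely detuned copy, 1 Hz sharp
mix = [x + y for x, y in zip(a, b)]

# The pair beats at the 1 Hz difference frequency:
loud = max(abs(s) for s in mix[:2000])         # near t=0 the tones align: ~2.0
quiet = max(abs(s) for s in mix[21500:22500])  # near t=0.5 s they cancel
print(round(loud, 2), round(quiet, 2))
```

So detune doesn't remove the volume spike; it smears it out in time, trading a constant boost for that characteristic chorus-y pulse.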

Feryl wrote:

Just fix it with a super-quick delay (ED1 command).

reminds me of a steve brule solution

Huh, I never thought about putting a delay on the notes for #2. If we're talking channel echoes I usually just halve the volumes of both notes or turn down the volume for the echo note until the problem disappears, although I know there's times where I've forgotten to fix it or just plain didn't notice (like on one of the tracks on an album I just released, gah).

To clarify, I knew about the loudness war. I was talking about stuff that gets released here.

I remember one example of delay causing the phase effect for me, but it worked out great for the song, so I kept it (Last Year). Otherwise, I find it usually works. Sometimes I'll do it for individual notes on a channel. Reducing the volume is another method, but you may have to overcompensate for that.

Sandneil, it does appear that people just don't care. This trend sticks out in some of my favorite songs... Beakortolaris by Cerror and Fuelship by Syphus. I've been careful to avoid it for a while.

Also it's the way our stupid monkey ears are made:
https://en.wikipedia.org/wiki/Fletcher% … son_curves

I don't see what the problem is as long as it's not clipping

It's annoying / makes random parts unpleasantly loud?

Also it can be a byproduct of the mastering, maybe it sounds nice on their speakers? Maybe your player is less accurate?

herr_prof wrote:

Also it can be a byproduct of the mastering, maybe it sounds nice on their speakers? Maybe your player is less accurate?

There's no mastering in this case; I'm talking about XM files. It's the way they were tracked.

Yeah, but who's to say it's tracked in the same listening environment you're listening in? Maybe their computer speakers sound way different from your headphones or whatever.

I am incessantly told "your releases are too fucking quiet bro" by everyone ever, like a song is at its ideal volume when your volume dial is at 1% and it's already too loud