hehe, new here and paranoid. I'll try to explain better. I am terrible at explaining things, so bear with me!
The principle is simple.
1. Capture video from the webcam and convert it to a raw format. This is the first ffmpeg call.
2. Play make-believe and run it through an effect as if it were audio. This is the sox call.
3. Turn it into a more standard video format. This is the second ffmpeg call.
4. Play it with mplayer.
5. Enjoy.
To move the data from one program to the next, I use pipes: each program reads from stdin and writes to stdout. (The commands chained together into one pipeline are shown after the walkthrough below.)
Let's look at the commands a bit more closely now. Some of the arguments are unnecessary. This is a really rough hack, and I still need to refine it as I learn more.
1. ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -f rawvideo -pix_fmt rgb32 -s 320x240 -
"-f video4linux2 -s 320x240 -i /dev/video0 " reads from the webcam and makes the size 320x240.
"-f rawvideo -pix_fmt rgb32 -s 320x240" reformats the video to be raw RGB data. This means that there will be no header to mess up.
"-" write to standard out
2. sox -r 482170 -e u-law -t raw /dev/stdin -t raw /dev/stdout echo 0.8 0.88 60 0.4
Some of this is magic to me. We set the sample rate fairly high, tell sox to treat the input as raw u-law audio, read from stdin, write to stdout, and apply an echo effect. The arguments to the echo effect were copied and pasted from elsewhere.
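For the record, sox's echo effect takes gain-in, gain-out, delay (in ms), and decay, so "echo 0.8 0.88 60 0.4" means input gain 0.8, output gain 0.88, a 60 ms delay, and 0.4 decay. If you want a longer, heavier smear, something like this should do it (values picked out of thin air, untested):
sox -r 482170 -e u-law -t raw /dev/stdin -t raw /dev/stdout echo 0.8 0.9 200 0.6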
3. ffmpeg -f rawvideo -s 320x240 -pix_fmt rgb32 -i - -f avi -
Convert from raw RGB data to an AVI. This step will be unnecessary if I can get mplayer to play the raw RGB data directly (see the guess right after this).
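In theory mplayer's rawvideo demuxer should handle this and let me drop the second ffmpeg call, something like the line below, but I haven't verified the suboptions (in particular whether it accepts rgb32 as a format name), so treat it as a guess:
mplayer -demuxer rawvideo -rawvideo w=320:h=240:format=rgb32:fps=60 - -cache 150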
4. mplayer - -cache 150 -fps 60
Play it. I set the framerate higher than the capture rate (30 fps on my system) so that any extra data sox produces from the echo gets read and doesn't pile up.
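For completeness, chaining the four commands with pipes gives one big pipeline (split with backslashes so it fits on screen):
ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -f rawvideo -pix_fmt rgb32 -s 320x240 - | \
sox -r 482170 -e u-law -t raw /dev/stdin -t raw /dev/stdout echo 0.8 0.88 60 0.4 | \
ffmpeg -f rawvideo -s 320x240 -pix_fmt rgb32 -i - -f avi - | \
mplayer - -cache 150 -fps 60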
5. Enjoy (and piss off a nice forum member, apparently)
Sorry!