The rand() function in C gives you a pseudo-random number generator. To purists, it's got a lot of flaws, but I'm glossing over that for this post. In many cases, it is "good enough" to get the job done. A lot of times, I don't care that the distribution is not strictly even when you do something like:
int foo = rand() % 10;
Many times, just a rough approximation like above is enough to make something "feel random".
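If you're curious why the distribution isn't strictly even, here's a quick sketch of the arithmetic: rand() has RAND_MAX + 1 possible outputs, and if that count isn't a multiple of 10, the leftover values all pile onto the low remainders. The RAND_MAX of 32767 mentioned in the comment is just an assumption for illustration (it's the minimum the standard guarantees).

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // rand() produces RAND_MAX + 1 possible values; if that isn't a
    // multiple of 10, the extra values all land on the low remainders.
    // With RAND_MAX = 32767, remainders 0-7 each map to 3277 values
    // while 8 and 9 only get 3276.
    long total = (long)RAND_MAX + 1;
    for (int r = 0; r < 10; r++)
    {
        long hits = total / 10 + (r < total % 10 ? 1 : 0);
        printf("remainder %d maps to %ld of %ld values\n", r, hits, total);
    }
    return 0;
}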
The rand() function calculates each new "random" number with a formula that folds in the previously generated value(s).
So where do you get the "starting point" for the first number in the formula?
The standard C library maintains an internal state for the random number generator, and you can "seed" this state with the "srand(unsigned int)" function.
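To make that concrete, here's a toy version of the idea, in the spirit of the classic linear congruential generator. The constants come from the C standard's example implementation; your actual library's rand() may do something fancier, but the shape is the same:

static unsigned long toy_state = 1;   // the internal state the library keeps for you

void toy_srand(unsigned int seed)
{
    toy_state = seed;                 // "seeding" is just overwriting that state
}

int toy_rand(void)
{
    // each new value is computed from the previous state
    toy_state = toy_state * 1103515245UL + 12345;
    return (int)((toy_state / 65536) % 32768);
}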
So what do you pass to it?
Well, for any given "seed", you will generate the same sequence of pseudo-random numbers. For instance:
srand(42);
int a = rand() % 10;
int b = rand() % 10;
int c = rand() % 10;
will yield the same sequence for a, b, and c every time it is run. What if that is undesirable? It's almost like you need a random number to seed the random number generator, a "catch-22".
On desktops, seed values are often taken from some system time register; on Linux systems, they're sometimes derived from timers that measure things like the interval between a user's keystrokes.
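The time-register approach usually boils down to something like this on a desktop. time() returns the current calendar time in seconds, so two runs that start at different times get different seeds:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    // seed from the wall clock -- good enough for "feels random" purposes
    srand((unsigned int)time(NULL));
    printf("%d %d %d\n", rand() % 10, rand() % 10, rand() % 10);
    return 0;
}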
On microcontrollers, you often don't have user inputs, and system up-time timers don't work because the timer will likely hold the same value at the same point in the power-up initialization sequence every boot.
It can be difficult to get these miracle computing machines to be non-deterministic when you want them to be.
A trick you may be able to use, depending on your setup, is to use the ADC (analog-to-digital converter) built in to most AVR (and other brands of) micros to read the voltage level on a pin that is "floating" or otherwise not tied firmly to a particular voltage. Here's a short example of how that looks on an ATtiny85:
#include <avr/io.h>
#include <stdlib.h>

void setup_seed()
{
    unsigned char oldADMUX = ADMUX;

    ADMUX |= _BV(MUX0);                             //choose ADC1 on PB2
    ADCSRA |= _BV(ADPS2) | _BV(ADPS1) | _BV(ADPS0); //set prescaler to max value, 128
    ADCSRA |= _BV(ADEN);                            //enable the ADC

    ADCSRA |= _BV(ADSC);                            //start conversion
    while (ADCSRA & _BV(ADSC));                     //wait until the hardware clears the flag. Note semicolon!
    unsigned char byte1 = ADCL;
    (void)ADCH;                                     //reading ADCH unlocks the result registers for the next conversion

    ADCSRA |= _BV(ADSC);                            //start another conversion
    while (ADCSRA & _BV(ADSC));                     //wait again. Note semicolon!
    unsigned char byte2 = ADCL;
    (void)ADCH;

    unsigned int seed = (unsigned int)byte1 << 8 | byte2;
    srand(seed);

    ADCSRA &= ~_BV(ADEN);                           //disable the ADC
    ADMUX = oldADMUX;                               //restore the mux selection
}
In my case, PB2 was connected to a resistor, which was then connected through an LED to ground. When the pin is in a high-Z state (i.e., not driven by the CPU), this approximates "floating" closely enough to give me nice erratic values. Notice that I use the low bits of the ADC. The low bits represent smaller voltage differences and exhibit greater variance, so they're more likely to swing a lot on a floating pin.
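For reference, the usual pattern is to call setup_seed() once at startup, before the first rand() call. The PB0 blinky loop below is just a made-up stand-in for whatever your firmware actually does:

#include <avr/io.h>
#include <stdlib.h>

void setup_seed(void); // the function from above, assumed to be in the same project

int main(void)
{
    setup_seed();                  // seed rand() from ADC noise, once, at power-up
    DDRB |= _BV(PB0);              // PB0 as an output (hypothetical example)
    for (;;)
    {
        if (rand() % 10 == 0)      // roughly a 1-in-10 chance each pass
            PORTB ^= _BV(PB0);     // toggle the pin
        // ... delay, do real work, etc. ...
    }
}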
I'm not sure it was really needed to run the ADC clock at 1/128th of the I/O clock; I just added it for flourish, thinking a slower conversion would give the voltage more time to wander and therefore more variance. I have no scientific evidence that's true, though.
Enjoy!
--P