[NBLUG/talk] Request for help--how to program shuffling

ME dugan at passwall.com
Tue May 20 19:59:00 PDT 2003


> On Tue, 20 May 2003, Steve Zimmerman wrote:
> Unfortunately, you simply cannot create "randomness" on a deterministic
> machine.  If a computer is working properly, it will spit out the same
> output every time you give it the same input.

If you want a simple method for generating pseudo-random data from within
C/C++ without relying upon /dev/random, you can always seed the random
number generator with the current time. Of course, this does not have the
same level of entropy as /dev/random, which is able to draw on several
sources. (Use of /dev/random can also make your code less portable.)

// Simple pseudo randomness: seed rand() with the current time.
#include <ctime>
#include <cstdlib>
#include <iostream>
int main()
{
	std::srand(static_cast<unsigned int>(std::time(0)));
	std::cout << std::rand() << std::endl;
	return 0;
}

// Of course in Linux, re-running the above more than once within the same
//  one-second tick of time() will lead to the same predictable results as:

#include <cstdlib>
#include <iostream>
int main()
{
	std::srand(1);
	std::cout << std::rand() << std::endl;
	return 0;
}

// and the above's "repeatable randomness"

> If you want something to be "truly" random, you need a source of really
> random numbers.  /dev/random and /dev/urandom provide "random" numbers
> that the kernel creates using network traffic levels, hard drive
> statistics, and other things that "should" contain entropy.  I hear that
> atomic decay is another decent source for truly random numbers.

There are differences between /dev/random and /dev/urandom, though. (I
know Andru knows about these, as we talked about this a long time ago.)

When the entropy pool is exhausted, /dev/random will wait until "enough"
entropy is gathered before giving more output, while /dev/urandom just
keeps going. The cost of using /dev/random in Linux-land is that a program
asking for too much randomness may enter a period of waiting while new
entropy is generated.
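
As a rough sketch of what that looks like in practice, here is one way to
pull a few bytes straight out of /dev/urandom from C++ (Linux-specific, and
only a minimal example with token error handling):

// Read an unsigned int worth of kernel-gathered randomness from /dev/urandom.
#include <cstdio>
#include <iostream>

int main()
{
	unsigned int value = 0;
	std::FILE *f = std::fopen("/dev/urandom", "rb");
	if (f == 0) {
		std::cerr << "could not open /dev/urandom" << std::endl;
		return 1;
	}
	// /dev/urandom never blocks; point this at "/dev/random" instead and
	// the read may stall until the kernel gathers fresh entropy.
	if (std::fread(&value, sizeof(value), 1, f) != 1) {
		std::cerr << "short read" << std::endl;
		std::fclose(f);
		return 1;
	}
	std::fclose(f);
	std::cout << value << std::endl;
	return 0;
}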

> I highly recommend reading the beginning of Knuth's chapter on random
> numbers.  He talks about his super random function that was terrible, and
> then explains why a pseudo random function chosen at random is likely to
> be lousy.  I believe that this same lesson is also applicable to one's
> choice of shuffling algorithm.  If you pick (or make up) one at random, it
> is likely to have biases.  If you use Fisher/Yates, at least you only need
> to worry about your random number source (which is hard enough).
>
> Several online casino's have had problems with their shuffling algorithm.
> In one particularly sad case that I heard of, once the player had seen a
> certain number of cards, he could deduce the order of the rest of the
> deck.  Because only a small percentage of the potential orders were
> actually possible, the player could figure out which one was currently in
> play.

And even some real-life casinos have been reported to have problems with
this. According to one story, some electronic gambling machines in Las
Vegas, NV did not seed their random number generators well. As a result,
when each machine was powered down and then powered back up, it would
repeat the same results for the same inputs from the gambler. Normally this
would not be an issue, but guards were apparently under orders to shut down
all the devices and power them on again every day at around the same time,
and clever gamblers picked up on this. Word spread, and eventually an
investigation was opened into why people would suddenly rush to the casino
and win in droves at the same hour each day. (Andru and I heard about the
above story at the same time a few years back.)
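
Back to the original shuffling question: for what it's worth, here is a
minimal Fisher-Yates sketch in C++, seeded the same simple time() way as
above. It uses the quick-and-dirty rand() % n trick for picking indices,
which has a slight modulo bias, so treat it as an illustration rather than
something to run a casino on:

// Fisher-Yates shuffle of a 52-"card" deck.
#include <ctime>
#include <cstdlib>
#include <iostream>

int main()
{
	int deck[52];
	for (int i = 0; i < 52; i++)
		deck[i] = i;			// 0..51 stand in for the cards

	std::srand(static_cast<unsigned int>(std::time(0)));

	// Walk from the top of the deck down, swapping each card with a
	// randomly chosen card at or below it.
	for (int i = 51; i > 0; i--) {
		int j = std::rand() % (i + 1);	// 0 <= j <= i
		int tmp = deck[i];
		deck[i] = deck[j];
		deck[j] = tmp;
	}

	for (int i = 0; i < 52; i++)
		std::cout << deck[i] << " ";
	std::cout << std::endl;
	return 0;
}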

-ME



