Every gambler knows, or should know, how to calculate the odds or probability of his bets. It's simple arithmetic, not rocket science. The problem is that all too often those calculations are way off target. Not because the arithmetic is wrong, but because random chance cannot be accurately calculated. Which raises the question: "How probable is probability?"
"Very!" says the mathematician, pointing to the "Law of Large Numbers".
"Unsatisfactory!" says the luckless gambler, whose calculations so often prove wrong.
We have been taught from earliest youth to regard maths as absolute. 2 + 2 must always equal 4, never 3 or 5, or even 3.99 or 4.01. So while our math might be precise, the answer is too often wrong. We have to accept that probability is NOT ABSOLUTE! Mathematicians know the figures won't compute exactly, but accept a small "fudge factor" to make the calculation work.
To understand this we need to study the "Theory of Probability", and the mindset of the mathematicians who are trying to use it.
For most of Man's history, fortune, good or bad, was a "gift of the gods". I have no doubt that many early gamblers could calculate the odds, but their calculations proved unreliable. (Damn those pesky gods!) While gamblers bet one on one, this excuse for the unreliability of the calculations was acceptable. It was only when casinos began to appear, in the 16th century, that it became unacceptable. The casino owner had to KNOW that the advantage he had built into his games would stand the test of time. No gods, good or bad, allowed! Scientists and mathematicians were approached for an answer, and the consensus was positive. It was agreed that, regardless of the variance or deviation along the way, in any large series of random trials every possibility would tend to occur in proportion to its probability. In short, the odds would "average out" along the way. The larger the series of trials, the closer the result would be to the expected average (mean). In an infinite series, every possibility would occur in exact proportion to its probability.
This became known as the "Law of Averages", and was the basis of probability for 100 years.
When the French genius Pascal turned his attention to probability, he developed the theory that while the PERCENTAGE of deviation would DECREASE, the ACTUAL deviation tended to INCREASE. Jacob Bernoulli "proved" Pascal's theory, and issued a theorem (a proven theory) that stated:-
"In any series of EQUALLY DISTRIBUTED random trials, in which the individual trials are MUTUALLY EXCLUSIVE, the larger the number of trials the closer the percentage result will be to the expected (theoretical) MEAN, although the actual deviation will tend to get larger."
There are 2 points here that need clarification.
 EQUALLY DISTRIBUTED random trials. This means that each trial must be generated in the same way. You can't mix and match RNGs, wheels and coin tosses!
 Individual trials must be MUTUALLY EXCLUSIVE. This means that each trial must be completely independent, owing nothing to past trials and giving nothing to future trials.
This became known as the "Law of Large Numbers", and is the basis of modern probability. The problem is that it seems the same as the "Law of Averages". Even today many punters don't recognize the difference.
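The difference between the two laws can be seen in a quick simulation. The sketch below (a minimal illustration, using a fair coin for simplicity rather than any particular casino game) tosses a coin at increasing trial counts and prints both the absolute deviation from the expected mean and that deviation as a percentage of the trials. Over a single run the absolute figure will only *tend* to grow, but the percentage reliably shrinks:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Compare absolute deviation (in tosses) with percentage deviation
# from the mean, at increasing numbers of trials.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    absolute_dev = abs(heads - n / 2)      # actual deviation, in tosses
    percent_dev = 100 * absolute_dev / n   # deviation as a % of all trials
    print(f"n={n:>9}: absolute deviation={absolute_dev:>7.0f}, "
          f"percentage deviation={percent_dev:.3f}%")
```

This is Pascal's point in miniature: the percentage converges toward the mean while the raw count of the deviation drifts larger.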
On its own, the law doesn't make a great deal of difference. It was up to a little-known mathematician, Abraham de Moivre, to put probability firmly on the mathematical map! It could be said that Bernoulli qualified probability, and de Moivre quantified it!
De Moivre's theorem, the basis of the central limit theorem, states:-
The extent to which the actual result will diverge from the theoretical expectation is a function of the square root of the number of trials. This divergence, known as the STANDARD DEVIATION, can be calculated using the formula:-
SD = sq rt(n x p x q)

where:
SD = Standard Deviation
n = Number of trials
p = positive probability (the chance of a win)
q = negative probability (the chance of a loss)

Example, for even-chance (EC) bets over 100 trials:
SD = sq rt(100 x 18/37 x 19/37) = sq rt(24.98) = approx. 5
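The formula is simple enough to sketch in a few lines of Python (the function name and layout are mine, not part of any standard library):

```python
import math

def standard_deviation(n: int, p: float) -> float:
    """SD = sqrt(n * p * q) for n trials with win probability p, q = 1 - p."""
    q = 1 - p
    return math.sqrt(n * p * q)

# Even-chance bet on a single-zero wheel: p = 18/37, over 100 spins.
sd_100 = standard_deviation(100, 18 / 37)
print(round(sd_100, 2))  # roughly 5 wins either side of the mean
```

So for 100 even-chance spins, one standard deviation is about five wins either side of the expected result.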
The theorem goes on to state:-
"That 68.3% of the time the divergence will be one SD or less, either side of the MEAN.
"That 95% of the time the divergence will be two SDs or less, either side of the MEAN.
"That 99.7% of the time the divergence will be three SDs or less, either side of the MEAN.
"That only 0.3% of the time will the divergence exceed three SDs."
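Those percentages can be checked empirically. The sketch below (my own Monte Carlo illustration, assuming even-chance bets at p = 18/37 over 100-spin sessions) counts how often a session's result lands within one, two and three SDs of the mean. Because the number of wins is a whole number, the observed figures will only approximate the 68.3/95/99.7 bands:

```python
import math
import random

random.seed(42)  # fixed seed so the run is repeatable

n, p = 100, 18 / 37           # 100 spins, even-chance bet, single-zero wheel
mean = n * p                  # expected number of wins
sd = math.sqrt(n * p * (1 - p))  # about 5, per de Moivre's formula

sessions = 10_000
within = {1: 0, 2: 0, 3: 0}   # sessions landing within k SDs of the mean
for _ in range(sessions):
    wins = sum(random.random() < p for _ in range(n))
    for k in within:
        if abs(wins - mean) <= k * sd:
            within[k] += 1

for k in (1, 2, 3):
    print(f"within {k} SD: {100 * within[k] / sessions:.1f}%")
```

Run it and the three figures come out close to the theorem's bands, which is exactly the "averaging" the casino owner relies on.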
Not only does this theorem offer an explanation of "Regression Toward the Mean", but it allows us to roughly calculate and assess the deviations that are a common factor in any series of random trials.
It must be stressed that neither the Law of Large Numbers nor the Central Limit Theorem is absolute. When a theory is "proved", both its positive and negative aspects are included. The "fudge factor" that is a basic tenet of probability is still in force.
There is no way known to man to accurately calculate a probability! If you use the "Law of Large Numbers" or SDs to calculate, about the best you can hope for is that two-thirds of the time you might only be 1 SD off target. Much of the time you could be up to 2 SDs off target! How does this translate into figures? Not well for the punter! The number of trials is far too small for any degree of accuracy. The "fudge factor" is just too large.
There is one other factor that must be taken into account when working with short random trials. It is a theory, and no proof is offered: the "RANDOM WALK THEORY". It is obvious that every trial in a series changes the percentage of deviation, and possibly its DIRECTION. Unlike the SD, the random walk produces sharp zigzags in the short term, rather than the slower, average waves of the SD. It is in the peaks and valleys of these short-term zigzags that the punter will find the best chance of defeating probability.
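One simple way to picture those zigzags is to model each even-chance bet as a step of +1 (win) or -1 (loss) and track the running deviation from expectation. This is my own minimal sketch of that idea, not a statement of the random walk theory itself; it prints the final deviation, the one-SD figure for comparison, and how often the walk changed direction along the way:

```python
import math
import random

random.seed(7)  # fixed seed so the run is repeatable

p = 18 / 37                 # win probability, even-chance bet, single-zero wheel
deviation = 0.0             # running deviation from the expected result
direction_changes = 0       # win-to-loss or loss-to-win switches
last_step = 0
for trial in range(1, 1001):
    step = 1 if random.random() < p else -1
    if last_step and step != last_step:
        direction_changes += 1
    last_step = step
    deviation += step - (2 * p - 1)  # subtract the expected drift per trial

print(f"deviation after 1000 trials: {deviation:.1f}")
print(f"one SD at 1000 trials: {2 * math.sqrt(1000 * p * (1 - p)):.1f}")
print(f"direction changes: {direction_changes}")
```

The walk flips direction roughly every other trial, while the SD band widens only with the square root of the trial count: sharp short-term zigzags inside a slowly spreading envelope.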