Author Topic: HOW PROBABLE IS PROBABILITY?  (Read 561 times)

Harryj

  • Veteran Member
  • ****
  • Posts: 353
  • Thanked: 156 times
  • Gender: Male
HOW PROBABLE IS PROBABILITY?
« on: September 28, 2016, 01:07:49 PM »

                     

   Every gambler knows, or should know, how to calculate the odds or probability of his bets. It's simple arithmetic, not rocket science. The problem is that all too often those calculations are way off target. Not because the calculations are wrong, but because Random Chance cannot be accurately calculated. Which raises the question: "How probable is probability?"
     "Very!" says the mathematician, pointing to the "Law of Large Numbers".
     "Unsatisfactory!" says the luckless gambler, whose calculations so often prove wrong.

    We have been taught from earliest youth to regard maths as absolute. 2+2 must always equal 4. Never 3 or 5, or even 3.99 or 4.01. So while our math might be precise, the answer is too often wrong. We have to accept that Probability is NOT ABSOLUTE! Mathematicians know the figures won't compute exactly, but accept a small "Fudge Factor" to make the calculation work.

    To understand this we need to study the "Theory of Probability", and the mindset of the mathematicians who are trying to use it.

    For most of Man's history, Fortune, good or bad, was a "gift of the gods". I have no doubt that many of the early gamblers could calculate the "Odds", but their calculations proved unreliable. (Damn those pesky Gods!) While gamblers bet one on one, this excuse for the unreliability of the calculations was acceptable. It was only when casinos began to appear, in the 16th century, that this concept became unacceptable. The casino owner had to KNOW that the advantage (the House Edge) he had built into his games would stand the test of time. No Gods, good or bad, allowed! Scientists and mathematicians were approached for an answer. The consensus was positive. It was agreed that, regardless of the variance or deviation along the way, in any large series of random trials every possibility would tend to occur in proportion to its probability. In short, the odds would "average out" along the way. The larger the series of trials, the closer the result would be to the expected average (Mean). In an infinite series, every possibility would occur in exact proportion to its probability.

     This became known as the "Law of Averages", and was the basis of probability for 100 years.

    When the French genius Pascal's attention was drawn to probability, he developed the theory that, "while the PERCENTAGE of deviation would DECREASE, the ACTUAL deviation tended to INCREASE." Jacob Bernoulli "proved" Pascal's theory, and issued a Theorem (proven theory) that stated:-

 "In any series of EQUALLY DISTRIBUTED Random Trials, in which the individual trials were MUTUALLY EXCLUSIVE, the larger the number of trials, the closer the percentage result would be to the expected (Theoretical) MEAN, although the actual deviation would tend to get larger."

    There are 2 points here that need clarification.

[1] EQUALLY DISTRIBUTED Random Trials. This means that each trial must be generated in the same way. You can't mix and match RNGs, wheels and coin tosses!

[2] Individual trials must be MUTUALLY EXCLUSIVE. This means that EACH TRIAL must be completely independent, owing nothing to the past trials and giving nothing to the future trials.

     This became known as the "Law of Large Numbers", and is the basis of modern probability. The problem is that it seems the same as the "Law of Averages". Even today many punters don't recognize the difference.
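Pascal's point above, that the percentage deviation shrinks while the actual deviation grows, is easy to check with a quick simulation. A minimal Python sketch (the seed and checkpoint counts are arbitrary choices of mine, not from the original post):

```python
import random

# Simulate even-chance bets on a European wheel (p = 18/37) and watch
# the deviation at a few checkpoints: the percentage deviation shrinks
# as n grows, but the absolute deviation (in wins) tends to grow.
random.seed(1)

p = 18 / 37
wins = 0
for n in range(1, 1_000_001):
    if random.random() < p:
        wins += 1
    if n in (100, 10_000, 1_000_000):
        expected = n * p
        abs_dev = wins - expected          # actual deviation, in wins
        pct_dev = 100 * abs_dev / n        # percentage deviation
        print(f"n={n:>9}: abs deviation {abs_dev:+9.1f}, pct deviation {pct_dev:+.3f}%")
```

Run it a few times with different seeds: the last column collapses toward zero while the middle column wanders further from it.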

     On its own, the law doesn't make a great deal of difference. It was up to a little-known mathematician, Abraham de Moivre, to put probability firmly on the mathematical map! It could be said that Bernoulli qualified probability, and de Moivre quantified it!

     De Moivre's Theorem, the basis of the Central Limit Theorem, states:-

    The extent to which the actual result will diverge from the theoretical expectation is a function of the square root of the number of trials. This divergence, known as the STANDARD DEVIATION, can be calculated using the formula :-

     SD = the square root of (n x p x q)
  Where
     SD = Standard Deviation
      n = Number of trials
      p = Probability of a win  (e.g. 18/37 for an EC bet)
      q = Probability of a loss (e.g. 19/37 for an EC bet)

  Example, for EC bets over 100 trials:
     SD = sq rt(100 x 18/37 x 19/37)
        = sq rt(24.98)
        = 4.998
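For anyone who wants to verify the arithmetic, the worked example above can be reproduced in a few lines of Python (a minimal sketch; the variable names are mine):

```python
import math

# Standard deviation for even-chance (EC) bets on a European wheel
# over 100 spins, per SD = sqrt(n * p * q).
n = 100
p = 18 / 37   # probability of a win
q = 19 / 37   # probability of a loss
sd = math.sqrt(n * p * q)
print(round(sd, 3))  # -> 4.998
```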

    The theorem goes on to state:-
    "That 68.3% of the time the divergence would be 1 SD or less, either side of the MEAN.
    "That 95% of the time the divergence would be 2 SDs or less, either side of the MEAN.
    "That 99.7% of the time the divergence would be 3 SDs or less, either side of the MEAN.
    "That only 0.3% of the time would the divergence exceed 3 SDs."
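Those percentages can be checked empirically. A rough Python sketch that simulates many series of 100 EC bets and counts how often the result lands within 1, 2 and 3 SDs of the mean (the trial count and seed are arbitrary choices of mine):

```python
import math
import random

# Empirically check the 68.3 / 95 / 99.7 claim for 100-spin series of
# even-chance bets on a European wheel.
random.seed(2)

n, p = 100, 18 / 37
mean = n * p
sd = math.sqrt(n * p * (1 - p))

trials = 20_000
within = [0, 0, 0]                 # counts within 1, 2, 3 SDs
for _ in range(trials):
    wins = sum(random.random() < p for _ in range(n))
    dev = abs(wins - mean)
    for k in range(3):
        if dev <= (k + 1) * sd:
            within[k] += 1

for k, target in zip(range(3), (68.3, 95.0, 99.7)):
    print(f"within {k+1} SD: {100 * within[k] / trials:.1f}%  (theory ~{target}%)")
```

Because the number of wins is a whole number, the simulated percentages will hover near the theoretical values rather than hitting them exactly.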

    Not only does this Theorem offer an explanation of "Regression Toward the Mean", but it allows us to roughly calculate and assess the deviations that are a common factor in any series of random trials.

     It must be stressed that neither the Law of Large Numbers nor the Central Limit Theorem is absolute. When a theory is "proved", both its positive and negative aspects are included. The "fudge factor" that is a basic tenet of probability is still in force.
    There is no way known to man to accurately calculate a probability! If you use the "Law of Large Numbers" or SDs to calculate, about the best you can hope for is that 2/3rds of the time you might only be 1 SD off target. Much of the time you could be up to 2 SDs off target! How does this translate into figures? Not well for the punter! The number of trials is far too small for any degree of accuracy. The "fudge factor" is just too large.

    There is one other factor that must be taken into account when working with short random trials. It is a theory, and no proof is offered: the "RANDOM WALK THEORY". It is obvious that every trial in a series changes the percentage of deviation, and possibly its DIRECTION. Unlike the SD, the random walk produces sharp zigzags in the short term, rather than the slower, average waves of the SD. It is in the peaks and valleys of these short-term zigzags that the punter will find the best chance of defeating probability.
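A simple way to see the zigzags is to count how often the running result changes direction. A rough sketch, assuming EC bets on a European wheel (the seed and spin count are arbitrary choices of mine):

```python
import random

# Track the running deviation (wins minus expectation) spin by spin and
# count how often consecutive spins move it in opposite directions.
# Roughly half of all adjacent pairs flip direction: sharp zigzags.
random.seed(3)

p = 18 / 37
dev = 0.0
direction_changes = 0
last_step = 0
for spin in range(1000):
    step = 1 if random.random() < p else -1   # win or lose this spin
    if last_step and step != last_step:
        direction_changes += 1
    last_step = step
    dev += step - (2 * p - 1)                 # deviation from expectation

print(f"direction changes in 1000 spins: {direction_changes}")
print(f"final deviation: {dev:+.1f}")
```

With p close to 1/2, about half of the 999 adjacent pairs of spins reverse direction, which is why a spin-by-spin graph looks so jagged.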

      Harry

     



 
The following users thanked this post: kav, Reyth, slpcorner

kav

  • www.Roulette30.com
  • Administrator
  • Hero Member
  • *****
  • Posts: 1600
  • Thanked: 652 times
  • Gender: Male
Re: HOW PROBABLE IS PROBABILITY?
« Reply #1 on: September 28, 2016, 01:38:41 PM »
Amazing post Harry!
Thanks.
 

Harryj

  • Veteran Member
  • ****
  • Posts: 353
  • Thanked: 156 times
  • Gender: Male
Re: HOW PROBABLE IS PROBABILITY?
« Reply #2 on: September 28, 2016, 01:45:50 PM »
  The problem facing math buffs is the fact that maths treats probability as a 2-dimensional problem. This is only true in the long term, where the Law of Large Numbers and the Central Limit Theorem operate. The huge short-term deviations, and constant changes in the direction of flow, can only be assessed within the random walk theory. This adds a 3rd dimension to short-term play, and literally changes everything.

     It is easy to dismiss random walk as a wild theory, but any graph of normal flow will illustrate it clearly. Nor does it stop as the number of trials increases. Played spin by spin, it is obvious that random flow owes much more to random walk than, for example, to Regression Toward the Mean.

    Random flow doesn't move backward and forward like the tide. It occurs in ragged blocks, dominated by random walk. The averages along the way do not occur smoothly, as regression and the Law of Large Numbers might suggest. Nor can the vagaries of the walk be calculated. Probability offers us no real clue as to how far a deviation might wander from the mean. De Moivre's SDs indicate a possible limit, but the fudge factor is huge: roughly 32% and 5% for the more common 1 and 2 SD limits.
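The "ragged blocks" are easy to demonstrate. A rough Python sketch that measures the longest stretch the running deviation spends on one side of the mean (the seed and spin count are arbitrary choices of mine):

```python
import random

# Over 1000 EC spins, track the running deviation from expectation and
# find the longest unbroken block it stays on one side of the mean.
# Random walks tend to linger on one side in long, ragged blocks rather
# than oscillating tidily around zero.
random.seed(4)

p = 18 / 37
dev = 0.0
longest = current = 0
last_sign = 0
for _ in range(1000):
    dev += (1 if random.random() < p else -1) - (2 * p - 1)
    sign = 1 if dev > 0 else -1
    if sign == last_sign:
        current += 1
    else:
        current = 1
        last_sign = sign
    longest = max(longest, current)

print(f"longest one-sided block: {longest} spins")
```

Typically the longest block runs to hundreds of spins, which is why short-term flow looks nothing like a smooth return to the mean.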

    So the question remains: how probable, or more importantly, how useful is probability to the serious gambler?

         Harry
 

Reyth

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 3192
  • Thanked: 915 times
Re: HOW PROBABLE IS PROBABILITY?
« Reply #3 on: September 28, 2016, 04:41:16 PM »
I bypass math and go right to statistics with actual trials.  Your theory works! 

Your theorem:

rare events approached independently will not repeat very often for very long

This is probably the only thing I have found in roulette that actually works.  The key is discovering how to apply it for a profit!

Thank you Harry-Pales(R)!

« Last Edit: September 28, 2016, 05:35:30 PM by Reyth »
 

Bayes

  • Veteran Member
  • ****
  • Posts: 651
  • Thanked: 529 times
  • roulettician.com
Re: HOW PROBABLE IS PROBABILITY?
« Reply #4 on: September 29, 2016, 09:05:34 AM »
    There are 2 points here that need clarification.

[1] EQUALLY DISTRIBUTED Random Trials. This means that each trial must be generated in the same way. You can't mix and match RNGs, wheels and coin tosses!

[2] Individual trials must be MUTUALLY EXCLUSIVE. This means that EACH TRIAL must be completely independent, owing nothing to the past trials and giving nothing to the future trials.
   

Harry, nice post. I have to take issue with the above, though. Regarding point [1], if the outcomes are equally likely, and that's all you know, why can't you "mix and match"? Random is random.

The suggestion is that probability is a "property" of the physical medium generating the outcomes, so it's believed to be "objective" in that sense. But in reality, probability is just a measure of our ignorance - it's not something "out there" in the world. This is obvious when you think about it, because if I have knowledge about a particular wheel, RNG or coin which someone else doesn't have (for example, that it's biased or the outcomes aren't independent) then my betting may be very different from someone who doesn't know it.

With regard to [2], mutually exclusive doesn't mean the same thing as independent. In fact, if events are mutually exclusive then they can't be independent. If one event is independent of the other then knowing one doesn't affect the probability of the other, but if we know that events are mutually exclusive, then knowing one means the other cannot occur, which implies dependence, not independence.
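Bayes's distinction can be illustrated with two fair coin flips. A minimal Python sketch (the event names are mine): A = "first flip heads" and B = "first flip tails" are mutually exclusive and therefore dependent, while A and C = "second flip heads" are independent.

```python
from itertools import product

# Four equally likely outcomes of two fair coin flips: HH, HT, TH, TT.
outcomes = list(product("HT", repeat=2))

def prob(event):
    # Probability of an event = favourable outcomes / total outcomes.
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

A = lambda o: o[0] == "H"   # first flip is heads
B = lambda o: o[0] == "T"   # first flip is tails
C = lambda o: o[1] == "H"   # second flip is heads

both = lambda e1, e2: (lambda o: e1(o) and e2(o))

# Mutually exclusive: P(A and B) = 0, but P(A)*P(B) = 0.25 -> dependent.
print(prob(both(A, B)), prob(A) * prob(B))   # 0.0 0.25
# Independent: P(A and C) equals P(A)*P(C).
print(prob(both(A, C)), prob(A) * prob(C))   # 0.25 0.25
```

Independence requires P(A and C) = P(A)P(C); for mutually exclusive events with nonzero probabilities that product can never be matched, since the joint probability is zero.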

 
The following users thanked this post: slpcorner

Harryj

  • Veteran Member
  • ****
  • Posts: 353
  • Thanked: 156 times
  • Gender: Male
Re: HOW PROBABLE IS PROBABILITY?
« Reply #5 on: September 29, 2016, 04:03:07 PM »
   Hi Bayes,
            I deliberately used phraseology similar to that in the original Theorem. It may not sound quite right after 300 years.

 [1]      Remember there were not many precise ways to produce random trials then. Bernoulli was trying to make sure that the trials were not distorted by a faulty process. Hence "Equally Distributed": making sure that each trial was "Equal", or the same. Random number generators have come a long way since then. I will not dispute that different methods can now produce "equal" results.
        Certainly there is no suggestion that "probability" is a product of the method.

  [2]      "Mutually Exclusive". The phrase has changed its meaning somewhat over time. It is quite clear that Bernoulli meant "Completely Independent", which would have meant something slightly different at that time. If you have doubts, ask yourself how the individual trials could be random if they affected each other.

    Harry
 
The following users thanked this post: Reyth