
View Full Version : Bell Curve Approximations: Because 10,000d2 is too hard to roll



Yitzi
2012-06-18, 04:23 PM
I originally devised this for my upcoming system rework, but it's a potentially useful idea anyway, so I figured I'd post it separately so everyone can use it.

The basic idea is as follows: Say you've decided (as part of your worldbuilding) that an arbitrary person has a 1 in 100 chance of being more adept at combat than a 1st level commoner, a person who is adept at combat has a 1 in 100 chance of having PC levels, and someone with PC levels has a 1 in 2 chance of being an adventurer. Now you want to know the total number of adventurers in your world of a billion people.

The simplest approach is just to take the average: 10 million are adept in combat, 100,000 have PC levels, so 50,000 are adventurers. But that gives numbers that are too round, and you might want to give your world a more organic feel.

On the flip side, you could roll a d100 for each of the billion people, then a d100 for each time the first roll came up 100, then a d2 for each time that second roll came up 100. That would work, but it's the sort of thing that will take even a computer a long time.

In the middle is this method. It's based on the idea that if you roll any die enough times and add the results, the total will approximate a bell curve, and a bell curve is determined entirely by its mean (average) and variance (the square of the standard deviation). Thus, you can calculate the mean and variance of the huge roll you want, then roll a much smaller number of dice with the same mean and variance (or at least the same mean and a very close variance), and the results will be extremely close for a relatively small number of die rolls.

Fortunately, the mean and variance of a die roll are easy to calculate. A normal 1dX has mean (X+1)/2 and variance (X²-1)/12. Means and variances both add across independent rolls, so for instance 3d6 has mean 3×(7/2)=10.5 and variance 3×(35/12)=8.75. A constant addend has 0 variance, so the best approach is usually to set the variance where you want it, then add a constant to adjust the mean.
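As a quick sanity check, the formulas above are easy to put into code (a minimal Python sketch; the helper names are my own):

```python
def die_mean(X):
    """Mean of a single dX."""
    return (X + 1) / 2

def die_var(X):
    """Variance of a single dX."""
    return (X * X - 1) / 12

def roll_stats(n, X, c=0):
    """Mean and variance of n dX plus a constant c: means and variances
    add across dice, and the constant shifts only the mean."""
    return n * die_mean(X) + c, n * die_var(X)

print(roll_stats(3, 6))  # 3d6 -> (10.5, 8.75)
```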

For a "1 in X" chance roll (e.g. 1 in 100 means there's a 1% chance of getting 1 and 99% chance of getting 0), the mean is 1/X, and the variance is (X-1)/X².
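In the same sketch style, the target mean and variance for a batch of "1 in X" rolls:

```python
def one_in_X_stats(n, X):
    """Mean and variance of the number of successes over n '1 in X' rolls."""
    return n / X, n * (X - 1) / X**2

# 5 rolls of 1-in-2 give mean 2.5 and variance 1.25 -- exactly the mean
# and variance of 1d4, which is why those two are paired in the table below.
print(one_in_X_stats(5, 2))
```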

In order to approximate the bell curve well, you should use at least 3 of each die size dX that you'll be using (you can mix sizes, and if you have a few rolls left over you can do those exactly), and use the method only for sets of "1 in X" chance rolls where the number of rolls is at least close to X², so that the average number of successes is not much smaller than the "die size" X. For cases where the average is much smaller than the "die size" but the number of rolls is still too large to roll normally, I plan to post another approach later (approximation via Poisson distributions); in fact, the Poisson approach, though somewhat harder to do with normal D&D gaming tools, is considered better when the number of rolls is less than roughly one-eighth the cube of the "die size".
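The matching itself can be automated. Here's a sketch of how one might pick the die count and constant for a chosen die size D (my own helper, not part of the post's tables): match the variance as closely as an integer number of dice allows, then fix the mean with a constant.

```python
def dice_for(n, X, D):
    """Approximate n '1 in X' rolls as k dD + c: choose k so the dice
    variance is as close as possible to n(X-1)/X^2, then choose c so
    the mean comes out to n/X (rounded to an integer constant)."""
    target_var = n * (X - 1) / X**2
    k = max(1, round(target_var / ((D * D - 1) / 12)))
    c = round(n / X - k * (D + 1) / 2)
    return k, D, c

k, D, c = dice_for(3350, 100, 20)
print(f"{k}d{D}{c:+d}")  # -> 1d20+23
```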

Of course, this is a lot more useful if you don't have to actually calculate which dice to use, so here are some things to use, at least for "1 in 2" (up to a million rolls) and "1 in 100" (up to a billion rolls):

For "1 in 2":
For 1 roll, just use 1d2-1 (of course).
For 5 rolls, use 1d4
For 21 rolls, use 1d8+6
For 33 rolls, use 1d10+11
For 133 rolls, use 1d20+56
For 833 rolls, use 1d50+391, if you'd rather roll a d50 than several d20s.
For 3,333 rolls, use 1d100+1,616
For 13,333 rolls, use 1d200+6,566
For 83,333 rolls, use 1d500+41,416
For 333,333 rolls, use 1d1000+166,166
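To see the approximation in action, here's a minimal Python comparison for one entry above: 833 direct 1-in-2 rolls versus the single 1d50+391 shortcut. Both should land near the true mean of 416.5, within a few standard deviations (sqrt(833/4) is about 14.4).

```python
import random

random.seed(0)  # fixed seed so the comparison is reproducible

# The long way: actually make 833 coin-flip rolls and count the successes.
direct = sum(random.randint(1, 2) == 2 for _ in range(833))

# The shortcut from the table: a single d50 plus a constant.
shortcut = random.randint(1, 50) + 391

print(direct, shortcut)  # both land near 416.5
```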

For "1 in 100" (the following are approximate in the variance):

For low numbers, this method is very much not advisable.
For 1,300 rolls, use 2d4+2d8-1.
For 3,350 rolls, use 1d20+23
For 21,050 rolls, use 1d50+185
For 84,150 rolls, use 1d100+791
For 673,400 rolls, use 2d200+6,533
For 2,104,350 rolls, use 1d500+20,793
For 16,835,000 rolls, use 2d1000+167,349 (this is exact in the variance.)
For 105,218,800 rolls, use 2d2500+1,049,687
For 420,875,400 rolls, use 2d5000+4,203,753
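Beyond the ranges in these tables, you can go straight to the bell curve itself. Here's a Python sketch of the worldbuilding example from the top of the post, chaining three approximations instead of rolling a billion dice (random.gauss stands in for the dice here):

```python
import random

random.seed(1)  # reproducible

def approx_count(n, X):
    """Approximate successes in n '1 in X' rolls by sampling a bell curve
    with the matching mean n/X and variance n(X-1)/X^2."""
    mean = n / X
    sd = (n * (X - 1) / X**2) ** 0.5
    return round(random.gauss(mean, sd))

combat_adept = approx_count(10**9, 100)      # about 10 million
pc_levels = approx_count(combat_adept, 100)  # about 100,000
adventurers = approx_count(pc_levels, 2)     # about 50,000
print(adventurers)
```

The result will differ slightly from 50,000 on every seed, which is exactly the "organic feel" the method is after.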