Take 3.5

Players of a certain popular RPG system will be familiar with the notion of “taking 10”, accepting a central result instead of going through the trouble of rolling a die. GURPS uses the standard d6, with a mean result of 3.5. Based on discussions held in the unofficial GURPS Discord, some GMs find it simpler, when dealing with the massed die rolls associated with larger scales and higher tech levels, to assume this mean was attained.

This is indeed true in the limit – the Law of Large Numbers dictates that, as the number of dice rolled approaches infinity, the mean of the sample rolled tends to the statistical mean of the die. However, this is highly impractical in-game, as very few players or GMs attempt to roll infinite dice and even fewer have succeeded.
The nature of this problem offers its own solution: find the minimum number of dice that must be rolled before the GM can, with reasonable certainty, assume that the mean of the set was 3.5 and forego rolling entirely.

We begin with a somewhat stringent interpretation: how many dice are required, at a 95% confidence level, to ensure a mean result between 3.49 and 3.51?

Per the Central Limit Theorem (hereafter “CLT”), the mean of a large number of uniformly-distributed die rolls is approximately normally distributed. As the number of dice rolled tends toward infinity, the sample mean will tend to a perfect point – the expected value of an individual die roll – but until then it follows an increasingly steep normal distribution with mean 3.5 and standard deviation \sqrt{\frac{2.92}{n}} 1.
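To see the convergence concretely, the short Python sketch below (the dice counts and the 10,000-sample count are arbitrary illustrative choices) compares the empirical spread of sample means against the \sqrt{\frac{2.92}{n}} prediction.

import random
import statistics

def mean_of_roll(n_dice):
    # Mean of a single roll of n_dice six-sided dice.
    return sum(random.randint(1, 6) for _ in range(n_dice)) / n_dice

# Compare the spread of sample means against the CLT prediction sqrt(2.92 / n).
for n_dice in (10, 45, 367):  # illustrative dice counts
    means = [mean_of_roll(n_dice) for _ in range(10_000)]
    empirical_sd = statistics.stdev(means)
    predicted_sd = (2.92 / n_dice) ** 0.5
    print(f"{n_dice:>4}d: empirical SD {empirical_sd:.4f}, CLT prediction {predicted_sd:.4f}")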

As such, the calculations are fairly trivial if the CLT applies – as long as the sample is sufficiently large, the sample means will tend toward the distribution mean – and the confidence interval for the sample mean is \bar{x} \pm z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}}. For the situation above, we allow ourselves a total interval width of 0.02 – a margin of error of 0.01 on either side – and hence solve for the sample size in the equation z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}} = 0.01, which, at a 95% confidence level, yields z_{0.025}\frac{\sigma}{\sqrt{n}} = 0.01.
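Solving that last equation for the sample size gives the general form n = \left(\frac{z_{\frac{\alpha}{2}}\,\sigma}{E}\right)^2, where E is the desired margin of error; the chain below applies it with E = 0.01.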
Hence 1.96 \sqrt{\frac{2.92}{n}} = 0.01
Hence \sqrt{n} = \frac{(1.96)(1.71)}{0.01}
Hence n = \frac{(1.96)^2(2.92)}{(0.01)^2}
Hence n = 112174.72
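The arithmetic above is also easy to script. The following is a minimal sketch in Python – the function name dice_needed, and the use of the post's rounded variance of 2.92 rather than the exact 35/12, are illustrative choices – which reproduces the figure just calculated.

import math

Z_95 = 1.96          # z-value for a 95% confidence level (two-sided)
VARIANCE_D6 = 2.92   # variance of a single d6, rounded from 35/12

def dice_needed(margin, z=Z_95, variance=VARIANCE_D6):
    # Smallest number of dice whose mean falls within +/- margin of 3.5,
    # at the given confidence level, under the normal approximation.
    return math.ceil((z ** 2) * variance / (margin ** 2))

print(dice_needed(0.01))  # 112175 -- the 112174.72 above, rounded up to whole dice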

It thus seems highly impractical to make such an assumption – very few GMs will invest over 100,000d into a single roll. A slightly looser confidence interval may thus be accepted by all but the most stringent – perhaps a margin of 5% of the mean, to match the 5% significance level. This yields a confidence interval of 3.325 \leq \bar{x} \leq 3.675, close enough for most, alongside z_{0.025}\frac{1.71}{\sqrt{n}} = 0.175 at a 95% confidence level.
Using the same calculations as above, this yields n = 366.2848, with 367d being a figure which could conceivably appear in a game.
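(For reference, the dice_needed sketch above gives dice_needed(0.175) = 367.)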

Perhaps, however, you do not seek adherence to the strictest margins and only wish to know that the mean is somewhere between 3 and 4 – rather vague, but it may still satisfy some.
At a 95% confidence level, z_{0.025} \frac{1.71}{\sqrt{n}} = 0.5
Therefore \sqrt{n} = \frac{(1.96)(1.71)}{0.5}
Therefore n = \frac{(1.96)^2(2.92)}{(0.5)^2} = 44.869888
Thus, a mere 45d will assure, in 19 cases out of 20, that the mean result of the rolls will lie between 3 and 4 – enough for the particularly lazy to conclude it is probably pretty close to 3.5. This is the furthest we will go, as the normal approximation underlying the CLT ceases to hold once sample sizes grow sufficiently small 2.
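As a sanity check on the 45d figure, a quick simulation along the following lines (Python; the 100,000-trial count is an arbitrary choice) estimates how often the mean of 45d actually lands between 3 and 4.

import random

def mean_of_roll(n_dice):
    # Mean of a single roll of n_dice six-sided dice.
    return sum(random.randint(1, 6) for _ in range(n_dice)) / n_dice

trials = 100_000
hits = sum(1 for _ in range(trials) if 3 <= mean_of_roll(45) <= 4)
print(f"Mean landed in [3, 4] in {hits / trials:.1%} of trials")  # expect roughly 95%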

So, what do we make of this? For those who do not wish to roll masses of dice, we have a way to abstract large numbers of dice, albeit one which many already use.
For nitpicky players, we have a way to complain about GMs abstracting away dice rolls, especially rolls involving fewer than 45 dice or, on the very border of tolerance, 367.

As a final note, to any GM using this method to abstract dice: please note that these calculations are valid at a 95% confidence level, which is usually good enough. For extra realism, feel free to accept this result for the most part but, on a 1 on 1d20 (or, with a bit of extra inaccuracy, 16+ on 3d), decide on whatever mean outside of the margin feels appropriate (invalidating all the accuracy gained in this method, unless “just outside” feels suitable in almost all such situations).
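For any GM who wants that final wrinkle in play, here is one minimal sketch of such a house rule (Python; the default margin of 0.5 and the 0.25 step beyond it are arbitrary illustrative choices, not outputs of the calculations above).

import random

def take_3_5(margin=0.5):
    # "Take 3.5" with the suggested wrinkle: usually accept the mean, but on
    # a 1 on 1d20 pick a value just outside the chosen margin instead.
    if random.randint(1, 20) == 1:
        direction = random.choice((-1, 1))
        return 3.5 + direction * (margin + 0.25)  # arbitrary "just outside" step
    return 3.5

print(take_3_5())  # 3.5 in 19 cases out of 20; occasionally 2.75 or 4.25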


1. The variance of a discrete uniform distribution unif{a,b} is \frac{(b-a+1)^2-1}{12}, and so 1d, which follows a unif{1,6} distribution, yields \sigma^2 = \frac{35}{12} \approx 2.92.
2. A sample size of 30 is about the lowest at which the CLT will hold to an appreciable extent and, at a 95% confidence level, yields a confidence interval of \bar{x} \pm z_{0.025} \sqrt{\frac{2.92}{30}} = \bar{x} \pm 0.611486494808621 (to 15 significant digits), or 2.889 \leq \bar{x} \leq 4.111 (to 4 significant digits).
