
View Full Version : [Mechanics] In which I realise I like the bell curve

Kiero
2008-12-15, 07:30 AM
I'm playing two different systems right now, WFRP and its percentiles, and Saga Edition which is d20.

I like certain elements of the systems (though to be honest I like the games as much in spite of the system in both cases; it's more about the people...), but one thing really bugs me about both: the core resolution mechanic.

Normally I prefer dice pools. Not big ones, Exalted levels of bucket o'dice is not my idea of fun. But small ones I like. I wasn't sure exactly why until someone pointed it out in another discussion. Multi-die/pool systems operate along the lines of a bell curve, so character competence is somewhat predictable. You're a lot more likely to get a result in the middle than anything else.

Apparently single-die systems are linear - every result is equally likely. Which means you can see-saw between raving incompetence and unlikely mastery. I've had quite a few examples of both in the games I've played.

Early on in the Saga game, I seemed incapable of rolling higher than 10. Then later I had a string of high-teens rolls. In the WFRP game, it's a running joke that my character can't hit anything with his bow, even though for a while his BS was better than his WS. I must have fired off about fifteen arrows now, he's hit once. Even with situational modifiers that pushed his chances up quite a bit. Yet in melee combat, I seem to roll better (although they can be dodged/parried which pushes the whiff factor back up again).

I like predictable competence. I like knowing that having a certain level of skill means certain things are highly likely to be possible. As in likely in practice, not simply in terms of the probabilities.

Are there any simple fixes for D100 or D20 games? I was thinking for example rolling D10+D12-1 in place of a D20 (and only a 21 is a critical). Or simply 2D10, with 2 and 20 being miss/critical. How does that affect the probabilities?

Thoughts?

monty
2008-12-15, 07:41 AM
2d10 would certainly be more normalized than 1d20, yes.

2008-12-15, 07:45 AM
There are upsides and downsides to it. The upside, of course, is that results are more reliable. The downside is that D&D is balanced, at least in part, on the assumption that every result on a die roll is equally likely to come up, and changing that could result in unforeseen problems down the road.

Tsotha-lanti
2008-12-15, 07:46 AM
http://www.skepdic.com/selectiv.html

http://www.skepdic.com/representativeness.html

http://www.skepdic.com/selectionbias.html

Your problem is illusory. With a system like Warhammer FRP, you know precisely what your odds of success are. If your modified WS for an attack is 50%, that's precisely your odds. If you wrote down 10,000 attacks with your bow, they would come out fairly close to 100 for each possible d100 result (I can't say how close off-hand; with so many different results, the variation is considerable, but it will inevitably be within the limits allowed by chance, and the more repetitions you do, the closer to the theoretical average it will be).

With d20 systems, the deal is the same. When you look at your bonus and the DC, you see precisely the chance you have to succeed.

What multiple dice - like the 3d6 of GURPS - do is make the actual average result more common. 10 and 11 are not the most common results in d20, and 50 isn't the most common result in Warhammer. However, 10 and 11 are the most common results in GURPS, rolling 3d6; indeed, the two make up 25% of all results (there being 16 different ones).

This does add predictability, in a way, especially with high skill. The higher your skill in GURPS (or a similar system), the less your chances of success are affected by negative (or positive) modifiers. A change in your modified skill from 17 to 16 is negligible; a change from 11 to 10 is -12.5% chance of success.
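Those 3d6 figures check out under a brute-force enumeration; a minimal Python sketch:

```python
from itertools import product
from fractions import Fraction

# Enumerate all 6^3 = 216 outcomes of 3d6.
counts = {}
for roll in product(range(1, 7), repeat=3):
    s = sum(roll)
    counts[s] = counts.get(s, 0) + 1

# 10 and 11 together really are a quarter of all results.
p10_11 = Fraction(counts[10] + counts[11], 216)
print(p10_11)  # 1/4

# Roll-under success chance: P(3d6 <= skill)
def success(skill):
    return Fraction(sum(c for s, c in counts.items() if s <= skill), 216)

print(success(11) - success(10))  # 1/8 -> a 12.5% swing in the middle
print(success(17) - success(16))  # 1/72 -> a ~1.4% swing at the top
```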

Incidentally, the chance of getting 21 on 1D10+1D12-1 is about 0.8%, or one-sixth the chance of getting a 20 on d20 (5%). To get all the chances, you just need to make a matrix with the D12 on one side (say, horizontal), the D10 on the other (say, vertical), then record the results. Like so:

X 1 2 3 4 5 6 7 8 9 10 11 12
1 1 2 3 4 5 6 7 8 9 10 11 12
2 2 3 4 5 6 7 8 9 10 11 12 13
3 3 4 5 6 7 8 9 10 11 12 13 14
4 4 5 6 7 8 9 10 11 12 13 14 15
5 5 6 7 8 9 10 11 12 13 14 15 16
6 6 7 8 9 10 11 12 13 14 15 16 17
7 7 8 9 10 11 12 13 14 15 16 17 18
8 8 9 10 11 12 13 14 15 16 17 18 19
9 9 10 11 12 13 14 15 16 17 18 19 20
10 10 11 12 13 14 15 16 17 18 19 20 21

There are 120 possible results, and the odds of any one are easy to see. A 2 is 2/120, a 5 is 5/120, a 19 is 3/120, etc.

The result is needlessly complex and weighted heavily toward the middle results, 8-12.
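The same matrix can be enumerated programmatically; a quick Python sketch confirming the counts above:

```python
from itertools import product

# All 10 * 12 = 120 outcomes of 1d10 + 1d12 - 1.
counts = {}
for a, b in product(range(1, 11), range(1, 13)):
    t = a + b - 1
    counts[t] = counts.get(t, 0) + 1

print(counts[21])  # 1 (only 10 + 12 - 1) -> 1/120, about 0.8%
print(counts[2])   # 2
print(counts[5])   # 5
print(counts[19])  # 3
```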

In summation, I suggest studying some statistics. It's useful for anyone who's really interested in how RPG mechanics work and compare to each other.

Kiero
2008-12-15, 08:09 AM
There's nothing illusory about it. I'm not talking about probabilities, I'm talking about outcomes. Multiple-dice systems have a bell curve weighted towards the results in the middle: middle results are more predictable, and results at the extremes are less likely.

As to studying statistics, sorry, but no thanks. I don't study anything for fun, and I can't imagine anything less enjoyable than stats. That's why I ask people who do have an interest in that area.

Kantur
2008-12-15, 08:19 AM
For changing d20 to a bell curve system:

Will help you change it to a 3d6 based system and help alter the threat values of weapons and so forth and alterations to the Luck Domain for Clerics, automatic successes and fails and CRs, etc.

As for changing a D100-based system, I don't have any ideas off hand, and it'd probably have to change for each D100 system anyway (alterations for games with weapons with a chance of jamming, those that have automatic success or failure within certain values, Impale attacks in BRP/Call of Cthulhu, changes to situational modifiers...)

Tsotha-lanti
2008-12-15, 08:20 AM
My point was that what you rolled is irrelevant. Chance is always going to be involved, and although the results will always follow the averages, any particular series of rolls may be low or high. Multiple dice just make for slightly more complicated calculations when figuring out the odds.

Edit: Changing d20 to bell curve is one of the worst ideas ever. That got taken apart in the last thread on the topic, I think, but basically it would require a complete and complicated reworking of everything. Rogues will suck way more than fighters at hitting things, etc. etc.

Kiero
2008-12-15, 08:31 AM
I'm not talking about D&D, I'm talking about Star Wars Saga Edition. Which is subtly different.

Tsotha-lanti
2008-12-15, 08:45 AM
The effect isn't really any different. In both cases, you're way less likely to succeed at difficult tasks, and the lower your bonus, the more it hurts you. Making it balanced would require re-working all DCs etc., and even then the math would be too complicated.

That's why in "roll + bonus VS difficulty" systems, you don't usually use multiple dice as the roll. It gets way complex. It works much better in "roll under difficulty" systems.

Kizara
2008-12-15, 08:46 AM
I'm playing two different systems right now, WFRP and its percentiles, and Saga Edition which is d20.

I like certain elements of the systems (though to be honest I like the games as much in spite of the system in both cases; it's more about the people...), but one thing really bugs me about both: the core resolution mechanic.

Normally I prefer dice pools. Not big ones, Exalted levels of bucket o'dice is not my idea of fun. But small ones I like. I wasn't sure exactly why until someone pointed it out in another discussion. Multi-die/pool systems operate along the lines of a bell curve, so character competence is somewhat predictable. You're a lot more likely to get a result in the middle than anything else.

Apparently single-die systems are linear - every result is equally likely. Which means you can see-saw between raving incompetence and unlikely mastery. I've had quite a few examples of both in the games I've played.

Early on in the Saga game, I seemed incapable of rolling higher than 10. Then later I had a string of high-teens rolls. In the WFRP game, it's a running joke that my character can't hit anything with his bow, even though for a while his BS was better than his WS. I must have fired off about fifteen arrows now, he's hit once. Even with situational modifiers that pushed his chances up quite a bit. Yet in melee combat, I seem to roll better (although they can be dodged/parried which pushes the whiff factor back up again).

I like predictable competence. I like knowing that having a certain level of skill means certain things are highly likely to be possible. As in likely in practice, not simply in terms of the probabilities.

Are there any simple fixes for D100 or D20 games? I was thinking for example rolling D10+D12-1 in place of a D20 (and only a 21 is a critical). Or simply 2D10, with 2 and 20 being miss/critical. How does that affect the probabilities?

Thoughts?

I use 2d10 for my 3.5 gaming. I found that it has a great many desirable effects and serves to fix many inherent problems with the system for comparatively little effort. I have made adjustments to crits to compensate, but honestly they are still marginalized, and really that cost is worth the enormous benefits gained.

For a look at a more drawn-out discussion, check out the link in my sig to the "2d10 Variant".

Prometheus
2008-12-15, 11:19 AM
Yeah, I think what Tsotha-lanti is saying is that right now the current odds are how the game is played, but you are right that you will receive more "regular" (that is, average) die rolls with more dice. If you changed the system so the odds stayed the same, it wouldn't matter very much that you changed, because your odds would be exactly the same after rescaling everything. However, if you don't change the odds, you are going to be changing how things balance slightly, which might be exactly what you need, or maybe not. You'll have to give 2d10 in place of 1d20 a try.

Kurald Galain
2008-12-15, 11:24 AM
That's why in "roll + bonus VS difficulty" systems, you don't usually use multiple dice as the roll. It gets way complex. It works much better in "roll under difficulty" systems.

How's that? Mathematically, "XdY + stat > target number" works out to the exact same thing as "XdY < stat". Unless you're suggesting that addition gets way complex.

A perennial problem with the D&D skill system in particular (it's less noticeable in combat) is that the spread of the dice (1-20) is much, much larger than the spread of stat+skill. This means that whether you succeed on a skill check depends more strongly on randomness than on whether your character is actually skilled.

(you know, highly intelligent sage character with +15 knowledge foo rolls a 2, whereas dumb fighter with a +0 total rolls an 18; such things happen way too often)

Tsotha-lanti
2008-12-15, 11:28 AM
Rolling multiple dice (a set number, such as 3d6) under a number is simpler, because only the target number changes.

When you're rolling dice (again, a set number, such as 3d6), adding a bonus, and then comparing to a target, you've got big variables at both ends. When you bolt it onto the D&D system, for instance, you are changing the significance of the bonus, and of the target number. This gets really complicated when you try to re-balance the system.

Raum
2008-12-15, 11:33 AM
I like predictable competence. I like knowing that having a certain level of skill means certain things are highly likely to be possible. As in likely in practice, not simply in terms of the probabilities.

A question - do you want more average results or simply more competence? Linear results can still provide a high level of competence, but they won't make average rolls more likely as dice pool mechanics do.

Are there any simple fixes for D100 or D20 games? I was thinking for example rolling D10+D12-1 in place of a D20 (and only a 21 is a critical). Or simply 2D10, with 2 and 20 being miss/critical. How does that affect the probabilities?

Either of those will work to increase average results while decreasing extreme results. d20 also has the 'take 10' rule for average results. Another possibility would be a Savage Worlds style wild die. Easier to fit into d20 than WFRP.

That does bring up a question on game flavor - WFRP is meant to be gritty, dirty, and rife with disease and maiming injuries. Changing the die mechanic will affect the game's style and flavor. Is that what you want?

Matthew
2008-12-15, 11:45 AM
A question - do you want more average results or simply more competence? Linear results can still provide a high level of competence but they won't make average rolls more likely as dice pool mechanics do.

Indeed. Reducing everything to a base probability is the easiest way to see how the factors interact. In something like Warhammer, the probabilities are more obvious because they are frequently described in percentage terms [e.g. you literally have a 30% chance of hitting an Orc or acquiring an insanity point, or whatever]. The dice are not the problem, as they are just random number generators; the issue is the base probability of successful task resolution.

2008-12-15, 12:04 PM
I've had this discussion with fusilier before, and what Tsotha-lanti says is pretty much spot on.

I realize he is typing as I type, but I'm going to take a stab at what he meant by 'illusory.'

Let's say that your game is balanced around average (define average however the individual GM wants) people being 50% successful. This means that on your d20, you want to roll 11 or better. On a 3d6, you want to roll an 11 or better. If, for some reason, your core mechanic is d4, you want to roll a 3 or better.

No matter what your mechanic is, your DM has already balanced the game so that you succeed 50% of the time. The 'illusion' is that the bell curve provides predictability, whereas it actually does not, you still have a 50% chance of success, no more and no less.

Now, of course, you, being players of powerful adventurers, are going to have a very good question. We create above-average characters! Therefore this doesn't apply to us!

Doesn't it? Your GM will scale your difficulty level based on your abilities. If you're level 20 and not level 1, it just means you're fighting dracoliches and sith lords instead of rats and bothans, which are going to be tougher and have higher defenses which counteract your own more skillful attacks.

Ah! But you may have one last question. The GM can't balance things for everyone around 50% (or may not want to). After all, fighters (or jedi) are going to be better at hitting things in melee than wizards (or stormtroopers). This is where the difference between the two systems emerges.

Let's say that you're playing someone who, for some reason or another, has a 55% chance to hit your "average 50% badguy". On a d20 system, that'd be a "+1" when you're trying to hit 11 or better. On a 3d6 system, well... there isn't really an analogue: a "+1" to your skill level results in a 62.5% chance to hit. So while the d20 system can address a 55% and a 60% chance to hit, the 3d6 system cannot. Now, the 3d6 system can address 95.3% and 98.1% chances to hit (the d20 only getting the 95% chance), but at that high a probability, does the extra 2.8% really matter (that's less than the chance of rolling an automatic failure in D&D)?

If you look at things from the perspective of plusses instead of percentages, the problem persists: If your fighter has a +1 and your wizard has a -1, they already have a difference in ability of 25% when rolling to hit a 11 or better. It gets worse from there -- A fighter with +3 and a wizard with a -3 have a skill difference of 67.6 percent. At further extremes, this basically means that encounters that have any chance at all of wizards having a heroic "oh my god I luckily saved your life with my dagger" moment will be too easy for the fighter "ho hum, I hit it again" and any encounters that are hard enough to challenge the fighter will be impossible to hit for anyone else.

What the 3d6 system does is take granularity of ability levels away from the GM where he cares the most about it -- right around the 50% mark.

Sidenote to Kurald Galain:
Spread of dice vs. spread of stat+skill actually doesn't matter that much, either. If your highly intelligent sage is rolling a d20+1000, that just means that he has to hit a DC 1003 -- and that the dumb fighter is going to have a d20+985 (unless, of course, you like the idea of certain characters being completely useless at specific tasks, which doesn't seem very heroic or fun to me -- or the idea of certain characters always succeeding at specific tasks, which has its own problems). In this case, a roll of 2 for the sage will still fail, and a roll of 18 for the dumb fighter will still succeed.

tl;dr: Bell curve takes granularity of power away from the GM where he needs it most, and doesn't provide much of a benefit to players in reducing 'randomness' -- only exaggerating differences in player abilities (in a bad way)
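The percentage gaps quoted above (25% at +/-1 and 67.6% at +/-3) can be reproduced by enumerating both mechanics; a short Python sketch:

```python
from itertools import product

def p_hit_d20(bonus, target=11):
    # Chance that 1d20 + bonus >= target
    return sum(1 for r in range(1, 21) if r + bonus >= target) / 20

def p_hit_3d6(bonus, target=11):
    # Chance that 3d6 + bonus >= target, over all 216 outcomes
    rolls = [sum(t) for t in product(range(1, 7), repeat=3)]
    return sum(1 for r in rolls if r + bonus >= target) / 216

# Fighter +3 vs wizard -3 against the same "hit 11+" task:
print(round(p_hit_d20(3) - p_hit_d20(-3), 3))  # 0.3 -> a 30% gap on d20
print(round(p_hit_3d6(3) - p_hit_3d6(-3), 3))  # 0.676 -> a 67.6% gap on 3d6
```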

Epinephrine
2008-12-15, 12:06 PM
Are there any simple fixes for D100 or D20 games? I was thinking for example rolling D10+D12-1 in place of a D20 (and only a 21 is a critical). Or simply 2D10, with 2 and 20 being miss/critical. How does that affect the probabilities?

Thoughts?

I have been looking at using a system with more central tendency as well - I'm leaning toward 2d10 rather than 3d6 (to replace d20) in my D&D campaign.

First, you can still get a pretty decent version of the 18-20, 19-20, 15-20 and 17-20 crit ranges using 2d10.

I found that you have to change the iterated attack rolls a bit to make them work; I think -4 per iterated attack looked like a reasonable substitution. I made up an Excel sheet and calculated the expected number of hits with different base "to hit" chances, and I seem to recall that it performed pretty similarly to 1d20 for the mid-range of "to hit" values.
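That comparison is easy to sanity-check; a minimal Python sketch (the BAB +11, AC 22, and three iterative attacks are illustrative assumptions, and natural-20 auto-hits are ignored):

```python
from itertools import product

def p_hit(outcomes, bonus, ac):
    # Chance that roll + bonus >= ac over the listed outcomes
    return sum(1 for r in outcomes if r + bonus >= ac) / len(outcomes)

d20 = list(range(1, 21))
d10x2 = [a + b for a, b in product(range(1, 11), repeat=2)]

def expected_hits(outcomes, bab, ac, step):
    # Full attack: iteratives at bab, bab-step, bab-2*step
    return sum(p_hit(outcomes, bab - step * i, ac) for i in range(3))

# Hypothetical full attack, BAB +11 vs AC 22:
print(round(expected_hits(d20, 11, 22, 5), 2))    # 0.75 with the standard -5 steps
print(round(expected_hits(d10x2, 11, 22, 4), 2))  # 0.79 with the suggested -4 steps
```

The two mechanics land close together here, which matches the recollection above that 2d10 with -4 steps performs similarly to 1d20 in the mid-range.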

I think one might also need to adjust a few other things - the relative BAB scores may need adjusting; a +4 difference in BAB is much bigger on 2d10 than on 1d20.

Some penalties and bonuses may need changing. DCs for saves become tricky. And so on.

I have ended up NOT doing it so far, because of these concerns. Save-or-die and save-or-suck spells can become really absurd, as boosting DCs results in incredibly rare saves (or incredibly easy saves, on the other end). BAB and AC adjustments can be a bit tricky if you are trying to maintain balance.

I have contemplated introducing 2d10 for skill checks alone; with the options being:
take 10 (base, easy)
take 20 (100 times the time needed)
take 17 (10 times the time needed)

Since I don't like huge variability in skill rolls, this can solve that without really affecting to-hit or save DCs. For skills I much prefer to let the number of ranks/skill modifier determine more of the effect than to let chance determine things.

Keld Denar
2008-12-15, 12:12 PM
I think statistics is why they took greataxes away from vanilla orcs in the 3.0 to 3.5 edition switch in D&D. A greataxe is 1d12, and orcs get +3 damage from Str. That means the top-end damage is 15, which is enough to drop a 12 CON wizard from full health to -10 in one shot, and take a full-health 18 CON fighter to dying (-1). Statistically, you had a high probability of killing your level 1 players any time there were orcs around. So... they swapped them out for falchions. A falchion has a smaller die range (2d4, max 8), which results in 11 top-end damage. Now, one hit to a solid fighter is gonna put him in danger, but not outright drop him (unless it crits). Also, the 2d4 mechanic tends to favor rolls around 5, so a hit from an orc has a better-than-average chance to deal 8 points of damage, and only a low chance to do the full 11.

All in all, 3.5 increased the survivability of level 1 characters, at least where orcs were involved. YAY!
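The two damage distributions are simple to enumerate; a Python sketch (using the +3 Str bonus above, with 12 damage as a hypothetical "drops a level 1 wizard" threshold):

```python
from itertools import product
from collections import Counter

STR_BONUS = 3

greataxe = Counter(r + STR_BONUS for r in range(1, 13))  # 1d12+3, range 4-15
falchion = Counter(a + b + STR_BONUS
                   for a, b in product(range(1, 5), repeat=2))  # 2d4+3, range 5-11

def p_at_least(dist, threshold):
    total = sum(dist.values())
    return sum(c for dmg, c in dist.items() if dmg >= threshold) / total

# Chance a single hit deals 12+ damage:
print(round(p_at_least(greataxe, 12), 3))  # 0.333 -- a third of greataxe hits
print(round(p_at_least(falchion, 12), 3))  # 0.0 -- the falchion caps at 11
```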

2008-12-15, 12:31 PM
First, you can still get a pretty decent version of the 18-20, 19-20, 15-20 and 17-20 crit ranges using 2d10.

Switching to 2d10 will actually mess up those crit ranges quite a bit. It will favor weapons with a wider threat range over those with a big crit multiplier, because the highest numbers come up less often.

Consider greataxe vs greatsword: 20/x3 or 19-20/x2. Those are balanced using a d20 where you have a 5% chance of rolling a 19 and a 5% chance of rolling a 20. On 2d10 you'll have a 1% chance of rolling a 20 and a 2% chance of rolling a 19. The greatsword will crit 3% of the time (19s and 20s) but the greataxe only 1% of the time. In this system the greatsword crits three times as often as the greataxe, but in standard d20 it only crits twice as often. That unbalances things in favor of the greatsword. I'm sure other weapons will get screwed up in a similar fashion.

Kurald Galain
2008-12-15, 12:43 PM
Switching to 2d10 will actually mess up those crit ranges quite a bit. It will favor weapons with a wider range rather than a big crit bonus because higher numbers will come up less often.
People claiming this are mistakenly assuming that somehow a great deal of thought was put into balancing crit ranges on D&D weapons. Whereas in fact the numbers are only based on convenience, and are completely arbitrary.

1/20 = 5%; 6/100 is pretty close to 5%, which corresponds to "critting on an 18+". 19+ on 1d20 corresponds exactly to 17+ on 2d10 (10% each). None of this is any more or any less balanced than anything else.

Epinephrine
2008-12-15, 12:54 PM
Switching to 2d10 will actually mess up those crit ranges quite a bit. It will favor weapons with a wider threat range over those with a big crit multiplier, because the highest numbers come up less often.

Consider greataxe vs greatsword: 20/x3 or 19-20/x2. Those are balanced using a d20 where you have a 5% chance of rolling a 19 and a 5% chance of rolling a 20. On 2d10 you'll have a 1% chance of rolling a 20 and a 2% chance of rolling a 19. The greatsword will crit 3% of the time (19s and 20s) but the greataxe only 1% of the time. In this system the greatsword crits three times as often as the greataxe, but in standard d20 it only crits twice as often. That unbalances things in favor of the greatsword. I'm sure other weapons will get screwed up in a similar fashion.

No, I meant that they can be converted well:

There are 100 ways 2d10 can fall.
20 = 10,10 = 1%
19 = 10,9; 9,10 = 2%
18 = 10,8; 9,9; 8,10 = 3%
17 = 10,7; 9,8; 8,9; 7,10 = 4%
16 = 10,6; 9,7; 8,8; 7,9; 6,10 = 5%
15 = 10,5; 9,6; 8,7; 7,8; 6,9; 5,10 = 6%
14 = 10,4; 9,5; 8,6; 7,7; 6,8; 5,9; 4,10 = 7%

20 (5%) -> 18-20 (1%+2%+3%=6%)
19-20 (10%) -> 17-20 (1%+2%+3%+4%=10%)
18-20 (15%) -> 16-20 (1%+2%+3%+4%+5%=15%)
17-20 (keen on 19-20, 20%) -> 15-20 (1%+2%+3%+4%+5%+6%=21%)
15-20 (keen on 18-20, 30%) -> 14-20 (1%+2%+3%+4%+5%+6%+7%=28%)

So you see, you can keep the same rough intervals. You get slightly more bang for your buck out of a Keen 19-20 weapon than out of a Keen 18-20, but the Keen 18-20 is still the biggest total crit chance.

x4 crit weapons end up with the best overall damage output from crits, but they don't benefit from crit-based enchantments very well. A Keen x4 damage weapon ends up equivalent to a 30%, rather than the 28% that you get from a keen 18-20 weapon, but they are all in about the right distribution - I doubt anyone would notice any real difference in play.
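The conversion above amounts to taking the top-N totals of 2d10; a short Python check of those percentages:

```python
from itertools import product

def top(n):
    # Chance that 2d10 lands in its top n totals (20 down to 21 - n)
    rolls = [a + b for a, b in product(range(1, 11), repeat=2)]
    return sum(1 for r in rolls if r >= 21 - n) / 100

print(top(3))  # 0.06 -> 18-20 on 2d10, close to a d20's 20-only 5%
print(top(4))  # 0.1  -> 17-20 on 2d10, same as 19-20 on d20
print(top(5))  # 0.15 -> 16-20 on 2d10, same as 18-20 on d20
```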

Mark Hall
2008-12-15, 01:01 PM
All that aside, I'd prefer to go 2D10 over 3D6 for my base probabilities.

While 3D6 results in a better bell, you have to rework a lot of the assumptions of the game (in 3 and 4, especially) to accommodate it... in 3.x, a 3D6 roll means you need to completely rethink criticals, and a bonus like Keen/Imp. Crit. becomes a lot bigger.

2D10 reduces this a lot. Criticals become much rarer (about a fifth as common), and the numbers all match up on the high end. It also makes critical failures (at 2, which is 1%) rare enough to consider using them (though I'd probably use a variation I used for D6 Star Wars, where you still succeeded if your total was high enough, but with some sort of mishap).

Raum
2008-12-15, 01:29 PM
tl;dr: Bell curve takes granularity of power away from the GM where he needs it most, and doesn't provide much of a benefit to players in reducing 'randomness' -- only exaggerating differences in player abilities (in a bad way)

This is a unique point of view. I don't see how it's possible either - it works well in many games. I suspect you may be confusing a result's 'granularity' with its 'linearity'. If not, can you explain?

Also, I'm not a big fan of artificially setting a 50% (or any other number) success / failure rate and maintaining it as PCs improve. There's not much point to improving if it doesn't make something better or easier. Treadmills get boring.

Kurald Galain
2008-12-15, 01:33 PM
Also, I'm not a big fan of artificially setting a 50% (or any other number) success / failure rate and maintaining it as PCs improve. There's not much point to improving if it doesn't make something better or easier. Treadmills get boring.
That's a good point, too - although I must point out that one of the design principles for 4E was to set a fixed success rate (the so-called sweet spot) and maintain this as PCs improve. This is why difficulty classes (or AC) improve at the same rate as your skills (or attack rolls) do.

Raum
2008-12-15, 01:53 PM
It's also one reason I tend to prefer skill based systems over level based systems. In a skill based system you don't need to escalate all of the opponents' power levels for them to remain a threat.
-----
Back to the original subject, systems based on a curve (bell or not) may be used to simulate reality better than a linear system. They make an average result more likely than an extreme result while still allowing the extremes to occur. By adjusting the curve's balance a system can also decide how common basic success or failure becomes. A linear system keeps all possible results equally likely and keeps adjustments the same 'value' across the spectrum of results. Changing the target number and / or the base decides how common success or failure is.

The only real differences between the two systems are the likelihood of extreme results and how static the value of a bonus is. Neither method is inherently better or worse.

Kurald Galain
2008-12-15, 02:12 PM
The only real differences between the two systems are the likelihood of extreme results and how static the value of a bonus is. Neither method is inherently better or worse.

On the other hand, any system where you make up statistics is going to be 38% better :smalltongue:

fusilier
2008-12-15, 02:16 PM
No matter what your mechanic is, your DM has already balanced the game so that you succeed 50% of the time. The 'illusion' is that the bell curve provides predictability, whereas it actually does not, you still have a 50% chance of success, no more and no less.

This is fine, if your system is pass/fail. However, critical failures and critical successes, or any system (or GM) which takes into account how much you fail or succeed by, changes this dramatically. While many systems aren't explicit, more often than not I've seen GMs use that metric to help describe the result. Missing a roll by 1 or 2, where your character is highly skilled, is not nearly as embarrassing as missing it by 10 or 11. GURPS routinely has rules where the amount you miss a roll by affects how bad the outcome is.

If you have an average skill roll (50%), the chances of missing it by "a lot" in a linear single-die system are greater than on a bell curve. So it clearly depends upon the system and the GM, and possibly the players, as to whether or not multiple dice appear better.
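A quick Python check of that claim, assuming a 50% roll-under target and defining "a lot" as missing by 5 or more:

```python
from itertools import product

# d20, roll 10 or less to succeed -> missing by 5+ means rolling 15+
d20_bad = sum(1 for r in range(1, 21) if r >= 15) / 20

# 3d6, roll 10 or less to succeed -> missing by 5+ means rolling 15+
rolls = [sum(t) for t in product(range(1, 7), repeat=3)]
d6x3_bad = sum(1 for r in rolls if r >= 15) / 216

print(d20_bad)            # 0.3 -- a 30% chance of a bad miss on d20
print(round(d6x3_bad, 3)) # 0.093 -- under 10% on 3d6
```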

I'm not aware of the GM routinely setting the difficulty to be around 50% (admittedly the GM rarely tells me what the difficulty is) -- I would set it to what I thought was appropriate for the situation. However, if D&D is simply pass/fail, and the difficulty is 50%, you don't need a d20 to play, just flip a penny! :-)

---
Another option is to change the number of dice rolled to represent the skill level of the character, a la West End Games. While this might have issues of power creep, it does mean that the better the skill, the more consistent the performance (the bell curve becomes more pronounced, or pointed toward the center, with more dice).

An interesting thing about statistics is that there are different ways of looking at it. How we generate the random numbers can matter, depending upon whether or not near misses are taken into consideration.

-Fusilier

Cainen
2008-12-15, 04:30 PM
That's a good point, too - although I must point out that one of the design principles for 4E was to set a fixed success rate (the so-called sweet spot) and maintain this as PCs improve. This is why difficulty classes (or AC) improve at the same rate as your skills (or attack rolls) do.

This also results in the stupidity of a trained, extremely competent adventurer having a pretty good chance of being unable to do anything at all during an entire encounter. Getting four or five tails in a row isn't unbelievable, after all.

And since you live and die in the short term, the averaging out won't mean anything if you end up dying before it happens.

This exact situation has happened without fail to one PC in every encounter we've had in my current 4E game. It's not fun at all.

fusilier
2008-12-15, 04:33 PM
I thought of another issue -

What does +1 mean?

In a linear system like D&D, +1 simply means +5% chance of succeeding. That's a very convenient way of looking at it, if you are accustomed to thinking about the system solely in terms of probabilities.

In a system like GURPS, +1 means something different.

If your effective skill in GURPS is 10 (50% chance of success), then a +1 is a big help (+12.5%). If your effective skill is 5 or 15, a +1 is not that big of a deal. Some people feel that this is unrealistic, or, if they are concerned with calculating probabilities, too difficult to keep track of. I don't really think that it is unrealistic, but it's arbitrary in my mind. This is how I see it:

+1 is a "little bit" of help.

If you are already really likely to succeed (e.g. skill = 15), then it doesn't change much.

If you are really likely to fail (e.g. skill = 5), it's still going to be a very difficult roll.

If you have an equal chance of failing or succeeding (e.g. skill = 10), then that little bit of help is more likely to make all the difference.

An example:
You need to calculate your longitude at sea. This involves some pretty complicated mathematics, you have no calculator, but you have a book of logarithms which gives you a +1. If you are really bad at math (skill = 5), chances are you won't even know what to do with the logarithms in the book. If you excel at math (skill = 15), you could probably calculate the logarithms yourself, so the book isn't too useful. But, if you are merely competent at math (skill = 10), then that book of logarithms could make a significant difference in your ability to complete the calculations.

I'm sure it's also possible to come up with situations where a simple +5% per +1 also makes sense. My point is, it's possible to come up with these situations where either example can seem to make more sense, so it's simply a matter of preference.
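That intuition matches the 3d6 numbers; a short Python sketch of the value of a +1 at skill 5, 10, and 15:

```python
from itertools import product

def p_success(skill):
    # GURPS-style roll-under: succeed if 3d6 <= effective skill
    rolls = [sum(t) for t in product(range(1, 7), repeat=3)]
    return sum(1 for r in rolls if r <= skill) / 216

for skill in (5, 10, 15):
    gain = p_success(skill + 1) - p_success(skill)
    print(skill, round(gain, 4))
# 5  -> 0.0463  (a +1 helps a little)
# 10 -> 0.125   (a +1 helps the most, right at the 50% mark)
# 15 -> 0.0278  (a +1 barely moves the needle)
```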

-Fusilier

Raum
2008-12-15, 04:47 PM
On the other hand, any system where you make up statistics is going to be 38% better :smalltongue:

Not sure how this is related to what you quoted (I said "static" as in unchanging, not "statistics"), but I agree. You know what they say... "There are lies, damn lies, and statistics."

Edit: @fusilier - That's essentially what I was referring to when discussing whether or not the value of a bonus was static. The function of a modifier in either system is the same though - it moves the value of your function (line or curve) up or down by the modifier's value.

2008-12-15, 04:51 PM
This is a unique point of view. I don't see how it's possible either - it works well in many games. I suspect you may be confusing a result's 'granularity' with its 'linearity'. If not, can you explain?

Also, I'm not a big fan of artificially setting a 50% (or any other number) success / failure rate and maintaining it as PCs improve. There's not much point to improving if it doesn't make something better or easier. Treadmills get boring.
Sorry if I wasn't clear. By 'granularity' I mean the exact percentage of success you want a player to have. If you feel they should have a 55% chance of success, you can do that with a d20 -- but not with a 3d6.

As far as 50% being the arbitrary success point, it is rather silly. That's part of the difference between a simulation and a game, though -- games have more burden on them to be mechanically entertaining, and so having something be too difficult, or not challenging, is part of the 'fun' equation. Simulations can go more for the inherent fun of a simulation, and can afford to be more realistic as far as target numbers go.

That's a good point, too - although I must point out that one of the design principles for 4E was to set a fixed success rate (the so-called sweet spot) and maintain this as PCs improve. This is why difficulty classes (or AC) improve at the same rate as your skills (or attack rolls) do.

This is fine if your system is pass/fail. However, critical failures and critical successes, or any system (or GM) which takes into account how much you fail or succeed by, change this dramatically. While many systems aren't explicit about it, more often than not I've seen GMs use that margin to help describe the result. Missing a roll by 1 or 2, where your character is highly skilled, is not nearly as embarrassing as missing it by 10 or 11. GURPS routinely has rules where the amount you miss a roll by affects how bad the outcome is.

If you have an average skill roll (50%), the chances of missing it by "a lot" in a linear single-die system are greater than on a bell curve. So it clearly depends upon the system, the GM, and possibly the players, as to whether or not multiple dice appear better.
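Putting rough numbers on that (a hypothetical comparison: a roll-under target of 10 gives roughly a 50% check in both systems, and "a lot" is taken as missing by 5 or more):

```python
from itertools import product

TARGET = 10  # roll-under target: about a 50% check in both systems

# d20: every result from 1 to 20 is equally likely
d20_miss_big = sum(1 for r in range(1, 21) if r - TARGET >= 5) / 20

# 3d6: the bell curve concentrates results near 10-11
d3d6_miss_big = sum(1 for dice in product(range(1, 7), repeat=3)
                    if sum(dice) - TARGET >= 5) / 216

print(f"P(miss by 5+) on d20: {d20_miss_big:.1%}")
print(f"P(miss by 5+) on 3d6: {d3d6_miss_big:.1%}")
```

That's 30% on the d20 against roughly 9% on 3d6, so catastrophic margins are over three times as common on the flat die.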

---
Another option is to change the number of dice rolled to represent the skill level of the character, a la West End Games. While this might have issues of power creep, it does mean that the better the skill, the more consistent the performance (the shape of the bell curve becomes more pronounced, or pointed toward the center, with more dice).
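That consistency claim can be quantified: the mean of Nd6 grows linearly with N, but the standard deviation only grows with the square root of N, so the spread relative to the mean keeps shrinking. A small sketch:

```python
from math import sqrt

mean_1, var_1 = 3.5, 35 / 12   # mean and variance of a single d6
rel_spread = {}                # n -> sd/mean for a pool of Nd6

for n in (1, 2, 3, 5, 10):
    mean_n, sd_n = n * mean_1, sqrt(n * var_1)
    rel_spread[n] = sd_n / mean_n
    print(f"{n:2d}d6: mean {mean_n:5.1f}, sd {sd_n:5.2f}, sd/mean {rel_spread[n]:.3f}")
```

The relative spread drops from about 0.49 for one die to about 0.15 for ten, which is exactly the "tighter bell curve" effect: a big pool almost always performs close to its average.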

-Fusilier

Ah, I indeed forgot about this part of our discussion. Pass/fail makes 3d6 look much less interesting, but in a degree of failure mechanic it means that you can more easily control the probabilities of really screwing something up (or just barely failing, which may still be somewhat of a success; i.e. shooting someone in the torso instead of the intended headshot). Of course, it's possible to do degrees of failure in a linear system (like the d20) but the d20 then runs into granularity problems on the outside edges instead (i.e. you want to assign 1.8% chance to a downright implausible screwup, which is possible in 3d6, but in d20, without adding additional 'critical critical failure dice' the lowest you can go is 5%)

Also, I really like the concept of the West End Games improvement via dice system, but I'm not so sure about the implementation. Adding up all those dice can take a long time sometimes.

Raum
2008-12-15, 05:05 PM
Sorry if I wasn't clear. By 'granularity' I mean the exact percentage of success you want a player to have. If you feel they should have a 55% chance of success, you can do that with a d20 -- but not with a 3d6.You can't change the success rate in even 5% chunks, but you can still change it. That's why I think you're conflating 'granularity' with 'linearity'.

As far as 50% being the arbitrary success point, it is rather silly. That's part of the difference between a simulation and a game, though -- games have more burden on them to be mechanically entertaining, and so having something be too difficult, or not challenging, is part of the 'fun' equation. Simulations can go more for the inherent fun of a simulation, and can afford to be more realistic as far as target numbers go.Sorry, can't agree with this at all. Some games are extremely difficult to master, that doesn't make them a simulation. In fact I'd say that difficulty has absolutely nothing to do with whether or not something is a game or a simulation.

fusilier
2008-12-15, 06:22 PM
West End Games -- Usually, the max roll was only 5 dice; I cannot remember rolling more than that on a regular basis. 5d6 isn't too bad, but too many dice can be annoying.

50% -- This is kind of unrelated, but -- if the DM/GM constantly adjusts the game to keep the difficulty level the same, isn't character advancement just an illusion? I mean from a game mechanic perspective. The characters are getting better, but so are the obstacles, so everything remains balanced.

Granularity vs. Linearity -- I think I agree with Raum here. What if you want the chance of success to be 52 1/3 %? The D20 system can *only* handle percentages in increments of 5%. I would think that granularity is the total number of different states you can represent -- 3d6 has 16, d20 has 20. Just because some probability values are easier to attain in one system than in the other doesn't change the granularity. How you come up with percentages in the first place, or why you would even care, is another discussion . . .
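For what it's worth, a quick sketch can separate the two notions by listing every roll-under success chance each system can actually represent. The d20 offers 20 even 5% steps (55% included); 3d6 offers 16 uneven steps that jump straight from 50% to 62.5% in the middle:

```python
from itertools import product

# Outcome counts for 3d6
counts = {s: 0 for s in range(3, 19)}
for dice in product(range(1, 7), repeat=3):
    counts[sum(dice)] += 1

# Every roll-under success chance each system can represent
d20_chances = [roll / 20 for roll in range(1, 21)]   # 20 even 5% steps

d3d6_chances, running = [], 0
for s in range(3, 19):
    running += counts[s]
    d3d6_chances.append(running / 216)               # 16 uneven steps

print(["%.1f%%" % (100 * p) for p in d3d6_chances])
```

So the d20 hits 55% exactly, while the closest 3d6 can get is 50% or 62.5%; which set of steps is more useful depends on where on the curve you want fine control.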

Simulations, vs. Game -- I think I see what Rhaudin is saying here. What we are getting down to is playability. This can be independent from whether or not it's a "game" or a "simulation." It may be more difficult to achieve in a simulation, than a game, but I've seen plenty of "games" fail the playability test. For simulations, the best thing to do is to resort to tables. That allows the complicated math to be handled beforehand, so that the players don't have to worry about it. Lately I've been thinking of an old miniatures war game I used to play (Johnny Reb), which was more like a simulation. The rules were quite complex, but once you got the order of play down, everything else was just looking up things in tables. Also the rules made sense and were not counter-intuitive, making them easier to learn.

-Fusilier

Lorthain
2008-12-15, 08:00 PM
I had an idea I want to toss into the pot: if you are interested in being able to implement extreme probabilities (such as a ~0.005% chance of an archer hitting a very distant target) without otherwise touching system balance, you could use open-ended rolls for those special situations. For example with a d20, that would mean a '1' on the first roll results in rolling again and subtracting that new number, further rolls and subtractions occurring on '20's; a '20' on the first roll results in rolling again and adding that new number, further rolls and additions occurring on '20's.

It isn't perfect and '1' or '20' results wouldn't be possible without further complication, but open-ended rolls allow weird probabilities with minimal fuss. It might be better suited to d100 with '0-9' and '00-90' dice, where a zero result is possible and there won't be gaps in possible results.
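A sketch of the exact arithmetic for the open-ended high end (using the rule above: faces 1-19 end the roll, a 20 adds 20 and keeps rolling; the subtraction side is symmetric):

```python
from fractions import Fraction

def p_at_least(threshold, depth=6):
    """Exact chance that an open-ended d20 totals `threshold` or more.
    Chains longer than `depth` rolls are ignored (prob < 20**-depth)."""
    if threshold <= 1:
        return Fraction(1)      # the minimum possible total is 1
    if depth == 0:
        return Fraction(0)      # truncate negligibly long chains
    # Faces 1-19 end the roll; a 20 contributes 20 and rolls again.
    terminal = sum(1 for face in range(1, 20) if face >= threshold)
    return (Fraction(terminal, 20)
            + Fraction(1, 20) * p_at_least(threshold - 20, depth - 1))

print(p_at_least(21))          # 1/20   - any roll that explodes at all
print(p_at_least(41))          # 1/400  - needs two 20s in a row
print(float(p_at_least(61)))   # 1/8000 - already down to 0.0125%
```

Each extra 20 in the chain divides the probability by 20, so sub-0.01% chances appear after only a few links. Note also that no final total of exactly 20 (or 40, 60, ...) can ever occur, which matches the caveat above about '20' results.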

Raum
2008-12-15, 09:45 PM
50% -- This is kind of unrelated, but -- if the DM/GM constantly adjusts the game to keep the difficulty level the same, isn't character advancement just an illusion? I mean from a game mechanic perspective. The characters are getting better, but so are the obstacles, so everything remains balanced.Agreed, you said it better than I did.

Simulations, vs. Game -- I think I see what Rhaudin is saying here. What we are getting down to is playability. This can be independent from whether or not it's a "game" or a "simulation." It may be more difficult to achieve in a simulation, than a game, but I've seen plenty of "games" fail the playability test. For simulations, the best thing to do is to resort to tables. That allows the complicated math to be handled beforehand, so that the players don't have to worry about it. Lately I've been thinking of an old miniatures war game I used to play (Johnny Reb), which was more like a simulation. The rules were quite complex, but once you got the order of play down, everything else was just looking up things in tables. Also the rules made sense and were not counter-intuitive, making them easier to learn.How are you using 'playability'? Tic tac toe is probably one of the easiest games to learn and master. Yet it becomes extremely boring after you've mastered it. Is it very playable because of how easy it is, or unplayable because there's no challenge left after you've mastered it?

I see games as having a different purpose than simulations. These purposes may be combined in a single medium on occasion, but one will take precedence over the other. Yet the purpose isn't dependent on complexity. For example, the 'game' of Life (http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) is a very simple simulation at heart.

fusilier
2008-12-16, 12:35 AM
I had an idea I want to toss into the pot: if you are interested in being able to implement extreme probabilities (such as a ~0.005% chance of an archer hitting a very distant target) without otherwise touching system balance, you could use open-ended rolls for those special situations.

. . .

It might be better suited to d100 with '0-9' and '00-90' dice, where a zero result is possible and there won't be gaps in possible results.

Another option-
I've seen "percentile" dice with a 100's and even 1000's numbers on them. So you could certainly handle such small percentages. Only rolling such dice when that level of granularity is required.

Hmm . . . it looks like this place has got them in the "hundreds of thousands!"

http://www.greathallgames.com/