What's the "sweet spot" for chance of success?



Notafish
2023-04-08, 10:27 AM
I'm curious about this for the purposes of setting DCs/ACs.

Now, I think this is a naturally variable answer - for obviously risky things with a high payoff, I'm fine with a low chance of success. Similarly, most games have other methods for dealing with enemies than simply getting hits past their armor, so variable defenses make the game richer and more varied. But I know that I don't want to play a game where most of my rolls are more likely than not to have no effect.

I'd guess that my threshold is probably around 65% for a heroic setting like D&D - as in, I'd hope that my attack rolls hit at least two-thirds of the time on average, or playing a fighter-type will get frustrating. Not a hard and fast rule - it's not like I'm keeping a tally of hits and misses in my games - more of a rule of thumb. I haven't really thought about how this might change in games with partial-success mechanics. Thoughts?

Quertus
2023-04-08, 10:41 AM
I’m a CaW player - the sweet spot for success is 100%, the interesting question is how you get there.

When chance does come into play, there’s no sweet spot - it completely depends on the character and the challenge. And the method used. The Wizard glowing with overflowing magical power might have a harder time convincing the village that he’s harmless than the young lad… until they respond to a puppy - the Wizard by ignoring the swords pointed at them to pet it, the child by ripping it in half and eating it.

Pex
2023-04-08, 03:34 PM
I cannot give a percentage chance of success as a "sweet spot". Every roll for everything always having the same percentage would be boring. I like the progression of getting better. I enjoy it when what once was hard is now easy. I get a sense of accomplishment when something I once could fail at I can now do with guaranteed success. It took time, effort, and invested character-building choices to reach the point of always succeeding at a particular Thing everywhere that Thing applies, and I want to enjoy the fruits of that labor. Always succeeding at everything in all ways is also boring, but I am not asking for that. It is perfectly acceptable for that to apply to one Thing or a couple of Things while everything else still carries a chance of failure.

What I can say is that I do not want my success to depend on someone else at the game table allowing it by personal fiat, regardless of any build choices I made for my character - accepting, of course, the obvious common-sense things that can never be done or can never fail.

Pauly
2023-04-08, 04:06 PM
It depends on the system and how heroic the setting is. I mostly play systems where the players are slightly better than real humans such as CoC or Traveller.

In those systems I work on the assumption that a normal person with normal skills and abilities has a 50% chance of performing a basic task under stress.
From there, broadly speaking - the exact numbers will change depending on which skills and abilities are being tested - the chance to perform a basic action under stress is roughly:
Untrained* ~ 30%
Basic training ~ 50%
Skilled ~ 65%
Highly skilled ~ 80%
World class expert ~ 95%

Edit to add: Most players most of the time would be testing for something that they are skilled or highly skilled at so 65-80% chance of success as the ‘normal’ test.

* some skills may be gated as being impossible to perform without training. For example in most systems anyone can try to bash down a door, but only those with training can try to pick the lock.

I also assume given unlimited time the players will eventually succeed. Rolling in low stress situations is used if
1) it is needed to keep track of time
2) there is a chance that if they screw up they could break the thing
3) there is a mini quest if they fail (go to the library to research what type of lock it is, buy a specific tool for the job, come back with some dynamite, seduce the manager to get the key)
4) there is a future complication (they forgot to close the door again so the BBEG will find out he’s been burgled).
5) you’re using a you need 3 successes before 3 failures method of storytelling.

False God
2023-04-08, 04:24 PM
I would generally agree that my ideal "difficult but likely to succeed" is about 60-65%. "Difficult and it's a coin toss" would be about 75% chance of success. "Difficult and you gotta really put some effort in" is about 85%, and "Oh boy, this one's a doozy!" would be 95% or 5%.

EggKookoo
2023-04-08, 06:21 PM
Maybe another question is, what do you consider a "balanced" chance of success? 50%, so the chance of success and the chance of failure are the same? Or is it more like 65% or 75%, with the idea that you should succeed more often than you fail?

False God
2023-04-08, 10:09 PM
Maybe another question is, what do you consider a "balanced" chance of success? 50%, so the chance of success and the chance of failure are the same? Or is it more like 65% or 75%, with the idea that you should succeed more often than you fail?

While we're asking, it's worth asking what "success" and "failure" look like. IMO, the rarer success is, the more dramatic it should be.

Lucas Yew
2023-04-09, 12:00 AM
In 5E terms, I'd say 65% per check with minimal investments on an average challenge appropriate for one's level.

If actually heavily invested (such as Expertise), it should be more like 85% by Tier 4 (levels 17-20). Very disheartening if your signature skill still hits like Focus Miss (from Pokémon)...

Anonymouswizard
2023-04-09, 05:39 AM
If it's your character's area of focus, and assuming 'mundane event with possibly interesting consequences'? 80-90%, slightly lower for combat characters. Going down to maybe 70% in your best area if you've intentionally made a jack of all trades. It kind of has to be a range to reflect differing areas of focus.

Notably this suggests the basic difficulty in D&D 5e should be DC 8. Rogue rescuing a cat from a tree to get on the mayor's good side? DC 8 seems reasonable. This should drop to about 60% for actual adventuring stuff, or as low as 30% to 10% for really out-there stuff, hence the standard adventuring DCs of 13-15 in 5e.

And of course if it's not interesting just let them do it.

This is, of course, assuming human-level PCs. If you're something like a Solar Exalted then we should be looking at a 99% success rate for basic adventuring stuff. Thousand-year-old vampires should probably be somewhere in between.

Vahnavoi
2023-04-09, 09:10 AM
The sweet spots are "none of the time", "half of the time" and "all of the time", because these are the odds most people are intuitively equipped to understand. Furthermore, all modifiers to these chances should be large enough that people can be expected to experience and notice them during play. There's actual scientific research on what the thresholds are, but sadly I failed to find a relevant article for now.

Much better known is that humans are bad at thinking in terms of classical probability, with the Monty Hall problem and the gambler's fallacy being the best-known examples. So, when deciding what a game's "sweet spot" is, a question has to be asked: is the game made to appeal to players' worst instincts, or made to exploit them, or are the players genuinely meant to count their odds (etc.)? Are you trying to make a game of chance or a game of skill?

In any case, it is not sufficient to consider individual die rolls; cumulative and non-independent probabilities have to be considered, which involves asking questions like "how many times per game are players expected to roll dice on this?" Roleplaying game design is actively held back by failures to make these considerations.

PhoenixPhyre
2023-04-09, 10:06 AM
I'm curious about this for the purposes of setting DCs/ACs.

Now, I think this is a naturally variable answer - for obviously risky things with a high payoff, I'm fine with a low chance of success. Similarly, most games have other methods for dealing with enemies than simply getting hits past their armor, so variable defenses make the game richer and more varied. But I know that I don't want to play a game where most of my rolls are more likely than not to have no effect.

I'd guess that my threshold is probably around 65% for a heroic setting like D&D - as in, I'd hope that my attack rolls hit at least two-thirds of the time on average, or playing a fighter-type will get frustrating. Not a hard and fast rule - it's not like I'm keeping a tally of hits and misses in my games - more of a rule of thumb. I haven't really thought about how this might change in games with partial-success mechanics. Thoughts?

One suggestive data point--

In 5e, if you start with a +3 in your attack stat and increase it at levels 4 and 8 (capping at +5), then with proficiency your attack bonus works out so that, facing a creature of CR = your level built "by the book", you have a 65% chance of hitting it (hitting on an 8+ on the d20) at every level except 9th, where there's a bobble and you have a 70% chance.

Against a "boss" (CR = level +3) it's more variable, but varies between 55% and 65%, averaging 62.25%. Against a "mob" (CR = level / 2), it ranges between 65% and 80%, averaging 75%.

So I'd say 65% is a fair expectation. And that accords with my general take--missing/failing sucks, so it should generally happen a minority of the time. It's part of life, so it has to happen some. If it didn't, we wouldn't need dice in the first place. Of course, this goes for the DM as well--his monsters should be able to actually hit people too. As a matter of personal preference, I much prefer scaling durability based on HP or the equivalent rather than by increasing defenses (on either side). Whiff-whiff-splat makes for really random gameplay. So something like Exalted 2e (IIRC)'s "Perfect Defenses" isn't my preferred style.

For ability checks, for some reason, it's kinda the reverse, although the sweet spot there is still somewhere between 50% and 60% success. Too much more and I'll just let you auto-succeed (unless the consequences for failure are huge). Too much less and it just gets frustrating as a player. NB as a DM, I don't start giving auto-failure until it's literally impossible to succeed. As in "success here would make absolutely no fictional sense, I can't imagine even a single slim way that approach could work." And since that should be obvious to the characters most of the time, they'll get a "you're pretty sure that won't work for <reasons>. Maybe you should try something else?" warning. So the chances of auto-success (anything where you've got only a slim chance of failure, plus anything where there aren't interesting consequences, plus anything that fictionally doesn't make sense to fail) are much larger than the chances of auto-failure. And the checks are for things somewhere in the middle, around the flat-ish part of the probability curve.

EggKookoo
2023-04-09, 10:38 AM
So I'd say 65% is a fair expectation.

Which is pretty close to 66.67%, so it has the benefit of being internalized as "you hit about twice as often as you miss." Which has nice feelgoods for the player.

PhoenixPhyre
2023-04-09, 10:42 AM
Which is pretty close to 66.67%, so it has the benefit of being internalized as "you hit about twice as often as you miss." Which has nice feelgoods for the player.

Yeah. Unless you have my luck or @KorvinStarmast's luck. Which means "I miss most of the time, even on a 65% chance to hit". :smallmad:

Vahnavoi
2023-04-09, 11:05 AM
2/3 chance to hit is fine for combat, because combat isn't usually decided by a single roll, it's determined by resource depletion. PhoenixPhyre's intuition to scale fights by hit points instead of miss chance is correct for that reason.

Typically, a character who has 2/3 to hit has much higher overall chance to win combat. But use these odds in a context without any resource buffer, and it will look a lot worse, which I suspect to be the basic flaw in many skill systems.
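
A quick Monte Carlo sketch of that effect (my own toy model, not any particular system's combat engine):

```python
# Two combatants trade swings, one point of damage per hit. A 2/3 hit
# chance filtered through a hit-point buffer wins far more than 2/3 of
# the fights.
import random

def fight(p_hero=2/3, p_foe=0.5, hero_hp=5, foe_hp=5):
    """True if the hero wins. Hero swings first each round."""
    while True:
        if random.random() < p_hero:
            foe_hp -= 1
            if foe_hp <= 0:
                return True
        if random.random() < p_foe:
            hero_hp -= 1
            if hero_hp <= 0:
                return False

trials = 100_000
wins = sum(fight() for _ in range(trials))
print(f"hero wins {wins / trials:.1%} of fights")   # ~80%, not 66.7%
# A single roll with no buffer succeeds only 66.7% of the time; the
# resource buffer converts a per-swing edge into a much larger edge
# over the whole fight.
```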

Telok
2023-04-09, 06:59 PM
The sweet spots are "none of the time", "half of the time" and "all of the time", because these are the odds most people are intuitively equipped to understand. Furthermore, all modifiers to these chances should be large enough that people can be expected to experience and notice them during play. There's actual scientific research on what the thresholds are, but sadly I failed to find a relevant article for now.

Oh, yeah. I did some checking around some time back on that too. I think I still have a few of the articles saved as pdfs somewhere. There's also a really huge perception effect where seeing the math versus just seeing the results affects how people value stuff. I can dig something up if you'd like.

Like a "reroll on a 1" thing with a d20. Statistically you know it's a minor boost because all it does is change you from rolling a d20 to a d19+1. But people will mostly remember the times when it went from a 1 to a 20 or success, and then put a bunch of emotional value on it. So you could throw tons of "reroll 1s" in something like D&D 5e and there'd be nearly no effect on the actual gameplay math, but people act like they're getting some big boost to stuff.

Vahnavoi
2023-04-10, 07:56 AM
@Telok: Yup, I was fairly sure you brought it up in an earlier thread, but didn't remember enough of the details to track it down. If you do have the reference at hand, sharing it with others in this thread would be a service to common knowledge.

The point about seeing the math versus seeing the results is a very good one. To add to it, when dealing with random effects it's worth asking, "if my players don't/didn't know the odds, how long would they need to play the game to even notice?"

Related: one easy way to cheat oneself with statistics is to give too much weight to some average (or other statistical artifact) that's unlikely or impossible to actually occur. For a super simple example, the average of a d6 roll is 3.5, which is not a result anyone will ever actually roll. It seems like a simple mistake to avoid, but a lot of people fall for it because of the method they use to approximate their odds or expected values, namely simulating a roll a few thousand times on a computer. That sometimes obscures the fact that when making the roll just once, you won't ever see the mean.

Quertus
2023-04-10, 09:05 AM
So, long ago, my brother and I made very different characters on Elder Scrolls: Arena.

When my character attacked, the tempo felt like “miss hit miss miss hit” or “hit miss hit hit miss”. When my brother’s character attacked, the tempo was more “hit crit hit hit crit” or “crit hit crit crit hit”. And that felt right.

So, my Jack of all trades character has a 50% to hit, modified by 10% based on the monster’s stats. My brother’s combat monster character has a 150% chance to hit, modified 10% by monster stats.

But that was for a real-time game, where that 5-attack cadence only took a matter of a few seconds. In a 6-second pen and paper RPG round, if the character only gets one attack/action? Well, 50% success for minimal investment, and 150% success for true specialization doesn’t sound that bad, actually.

Telok
2023-04-10, 10:33 AM
The point about seeing the math versus seeing the results is a very good one. To add to it, when dealing with random effects it's worth asking, "if my players don't/didn't know the odds, how long would they need to play the game to even notice?"

That's actually what started me down that road. Our group started using roll20 and I almost completely stopped looking at hit/check numbers. Then I realized I had no idea if 'reroll 1' or a d20+5 vs d20+1d4 was making any difference. I could calculate the %s in my head but I literally couldn't tell from the results.

Don't have PDFs conveniently available online, but here are a few papers:

Detecting Regime Shifts: The Causes of Under- and Over-Reaction - Cade Massey and George Wu
Detection of Change in Nonstationary, Random Sequences - Donald M. Barry and Gordon F. Pitz
Detecting Regime Shifts: The Role of Construal Levels on System Neglect - Samuel N. Kirshner
Detection of Change in Nonstationary Binary Sequences - John Theios and John W. Brelsford, Jr.

Personal conclusions: Like my search for basic perception data to make a % chart for the game, this search led me back to a set of core research from the '60s and '70s that people keep referencing and building on. Unlike that search, these researchers didn't have a crapton of military peeps available as subjects, and they had limited budgets*. Also different: the civvy research almost never includes the data at the end of the papers, uses way more discipline-specific jargon, and sometimes doesn't label the axes on charts. Label your chart axes, ****wad.

So I don't have nice numbers to play with this time, just general observations.

1. People have a really hard time discriminating a 60/40 split (a toy simulation of this is at the end of this post).
2. They aren't as good as you'd expect at catching a 70-75/30-25 split or a 50% vs 60% hit rate. For stuff around 70/30, their results overlap within error bars with the 60/40 results.
3. People are really good at noticing a 90/10 split or 50% vs 100% hit rate.
4. People reliably estimate a 60% rate as a 50% rate because they don't get streaks right, and people trying to produce a 50% "random" sequence by hand reliably put out a 60%-style set with too few and too-short streaks.
5. Personal tendencies towards thinking about stuff in aggregate (all events over time & multiple people/trials) vs individual (only consider my own last 10 trials) has a potentially major difference in perception of the rates & changes when combined with other factors.
6. Minimum 6 to 10 trials in a short amount of time to detect any change, increasing up to 20+ trials for some 60/40 sets. Consistent across several studies. 5 or 6 was to be sure of a 50% vs 100% or 90/10 split, 8 to 10 was for a 70/30 type split.
7. Some people simply couldn't tell any difference in a 60/40 split, and it got worse the closer you went, like 54/46.

My personal generalized conclusions.

A. You need 4+ trials to actually tell any difference in rates, and they need to be fairly close together in time - minutes at most. Anything rolled once an hour, you can't tell apart except for massive swings (50% vs 100%, or 90/10) or by direct comparison of the numbers ("used to fail this roll on a 17 or less, and just rolled a 17 = success").
B. You need at least a 20% swing in effect for everyone in the audience** to actually tell any difference in rates if you don't have the numbers in your face.
C. Systems that produce memorable success spikes (rerolls or post roll "add another die" type stuff) have a magnifying effect on people noticing rate increases, partially because they require attention & knowledge of the numbers (ref A & B caveats).


* "Do we know how good people are at spotting **** from jets?" "No. Lets send a bunch of guys out to the range in jeeps, trucks, and tanks, then spend two weeks flying fighter jets around at different altitudes to spot them." vs "If I grab 20 students and pay them $5, minus 7 cents per miss, estimate misses as... hmm... ok, yeah, that comes in under budget if I can get my roomate to write the software for a $8 pizza."

** People who know probability (especially stats students) and are looking specifically to identify rate changes notice it more. Everyone else you want to err on the high side.
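
To make observation 1 concrete, here's a toy simulation (mine, not from the papers) of how often a genuinely better option even looks better over a short run of side-by-side trials:

```python
# Compare N trials at 60% against N trials at 50% and count how often
# the 60% option shows strictly more successes.
import random

def looks_better(p_good=0.6, p_bad=0.5, trials=10, runs=100_000):
    wins = 0
    for _ in range(runs):
        good = sum(random.random() < p_good for _ in range(trials))
        bad = sum(random.random() < p_bad for _ in range(trials))
        wins += good > bad
    return wins / runs

for n in (6, 10, 20):
    print(f"{n} trials each: 60% visibly beats 50% ~{looks_better(trials=n):.0%} of the time")
# Roughly 52% / 59% / 68% -- even 20 paired trials leave the genuinely
# better option looking better only about two-thirds of the time.
```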

icefractal
2023-04-10, 03:04 PM
IMO, the less often a roll comes up, the bigger the difference between characters who are "good" and "bad" at it needs to be.

Like, in a typical D&D campaign where you're rolling dozens or hundreds of attacks, a 60% vs 40% chance to hit an average foe is quite noticeable. But if "Knowledge: Royalty" only comes up twice during the entire campaign, you pretty much want to ensure that the sage does better than the rando both of those times.

GloatingSwine
2023-04-11, 07:13 AM
Yeah. Unless you have my luck or @KorvinStarmast's luck. Which means "I miss most of the time, even on a 65% chance to hit". :smallmad:

I have good news about your application to join XCOM.

Easy e
2023-04-11, 02:00 PM
For me, there are a few primary considerations for setting DCs, and they are not linked to the rules in any way.

1. Will failure derail the game? If yes, the DC does not matter too much, as the PCs will not fail. I probably will not even make them roll, or I'll just have a token roll so skill choice/expertise matters. Bad rolls cost time, give partial information, or create a potential complication later, rather than outright failure.

Example: the players needed to bluff their way past a guard. They failed, and it stumped them for the rest of the session because the Face rolled poorly on a skill check. They eventually made a new plan, but it was a big hassle and distracted from what we were trying to do.

2. Will failure result in something interesting? Failure might lead to something interesting happening, like new challenges, a twist, or the game heading in a new direction. The more interesting the outcome, the more likely I am to grant partial success or outright failure on fair-to-middling rolls.

Example: the players were trying to hack into a restricted computer system to get a floor plan. If they failed, they would be detected and have to deal with security being aware of a potential breach. The computer roll was not great, but not abysmal either. Therefore, they got the floor plan they needed, but at the cost of the security profile being raised a few levels when the infiltration went down.

3. Will failure break the player out of the game? If you spent time/resources setting up a really smart character that knows stuff, failing a history or knowledge check is really annoying. If their character is good at climbing, let them climb! In short, let the experts be experts!

Example: in D&D 5e I had a scholar-type cleric who failed a knowledge check on scarecrow constructs; for the rest of the game, I was convinced they could only be hurt by silver. As a player, I knew better, and constantly failing those types of skill checks kind of took me out of the game. Because I rolled badly, my scholar was an idiot for most of the game and could rarely recall any historical details.


In summary, I use a completely arbitrary sliding scale of DCs for each character and situation based on the facts at hand. Sometimes I do not even decide on the DC until the player has rolled, and then I decide if it was "good enough". The reactions of the players to the roll often guide my decisions.

I know this approach is not the popular way.

Lilapop
2023-04-11, 02:51 PM
The question is even murkier when you start thinking about what a success is. Is it not missing an attack? Dealing enough damage with it (or, in the case of Warhammer 6th/7th edition, wounding after a hit)? Winning the fight?

It's been mentioned already, but I'll throw in some more examples of why volume matters.

In Warhammer, your attacks first have to hit (melee is very unlikely to get better than 66%; ranged is usually at 33% or 50%), then wound, and then the target can attempt an armor save, after which it might also have a ward save or regeneration. That means a single attack often has less than a 10% chance of inflicting any measurable damage. That would be incredibly frustrating if it were your one moment in the spotlight going around the table, but some of my units make 20 or more attacks in a single turn, so I tend to get at least something, and the results are generally very close to the average because it is such a tight bell curve (good batches of hits are often followed by bad wound rolls, and vice versa).
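
The volume effect is easy to put numbers on (generic binomial math; the 10%-per-attack figure is the ballpark from above, nothing edition-specific):

```python
# Distribution of damaging attacks out of 20, each with a 10% chance of
# getting through the whole hit/wound/save chain.
from math import comb

def binom_dist(n, p):
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

probs = binom_dist(20, 0.10)
print(f"P(nothing gets through): {probs[0]:.1%}")        # ~12.2%
print(f"P(at least one wound):   {1 - probs[0]:.1%}")    # ~87.8%
print(f"P(1 to 4 wounds):        {sum(probs[1:5]):.1%}") # ~83.5%
# One attack at 10% is a long shot; twenty of them almost always land
# something, clustering tightly around the mean of 2.
```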

Meanwhile, in World of Warcraft (at least back when I played), a boss fight could last 10 minutes, enough time for a dual-wielder to make 600+ auto-attacks. These would eventually manage to miss three, four, five times in a row, but you don't really care, because they are just a constant stream of noise. The big attack with a ten-second cooldown, however, wasn't affected by the additional 29% miss chance for dual-wielding, even when it was executed with both weapons, because it is - just like your turn in a TTRPG - in the spotlight. You would very quickly arrange your equipment bonuses so that those attacks wouldn't miss at all... and on something like an arms warrior, where any individual auto-attack could be the difference between having generated enough resource for your next special attack or not, such a build would also completely negate auto-attack misses.

I think the core difference here is mental weight on various individual rolls, which simply can be all over the place even within the same system.

kyoryu
2023-04-11, 03:48 PM
I think that's a very difficult question, and depends a lot on what your game is, why you fail, and how you mitigate failure.

In Fate, for instance, I prefer a fairly high rate of failure on rolls. This creates opportunities for people to decide whether or not they should invoke their way out of said failure.

In a game where the "real" game is to arrange circumstances to be in your favor, I'd generally want the chance of success to be dependent on how well you managed your situation.

Games where success is more-or-less presumed by the campaign structure should also be handled differently. And I think 'what does failure mean' enters into the discussion, too.

It's a multi-dimensional problem and dependent on enough variables that I don't think there's a single answer.

King of Nowhere
2023-04-11, 05:57 PM
It depends on specifics.
On combat, I agree on the 65% chance to hit, mostly, with caveats - the main one being that it applies to an average opponent. Someone squishy will be hit more often; someone tanky can be more difficult. A boss characterized as tanky may well be nigh-impossible to hit without substantial buffs. I'm thinking of a 3.5 scenario here, where there are many options to buff yourself or debuff an opponent; sometimes the point of a boss is that you need to spend a round or two debuffing him before you can get some good shots in.
On skill checks, it really depends on what you are trying to achieve and what your skill level is. An appropriate chance of success can be anything from "automatic success, no need to roll" to "you can't do it".

The most important thing for me is not the chance of success, but what consequences it has for the game.
On combat, I want to feel challenged. I want to feel that while my character is powerful, he's not the only powerful entity in this campaign world, and can't just wave his hands and do whatever he pleases. A boss that's been established with certain traits must show those traits in his fighting. The fight should be suitably hard; it must seem that the boss had some actual chance to win. Otherwise I get the impression that the campaign world is populated by morons.
On skills, I want to feel consistency between lore and numbers. If my character is an expert in a field, I want him to succeed at average tasks. If my character is not an expert, I don't want to succeed, except maybe occasionally by luck. If a task is supposed to be super difficult, I want it to be.

gbaji
2023-04-11, 06:52 PM
The question is even murkier when you start thinking about what a success is. Is it not missing an attack? Dealing enough damage with it (or, in the case of Warhammer 6th/7th edition, wounding after a hit)? Winning the fight?

This is extremely relevant. What does "success" mean? A lot of the time, when using a skill, it's a binary issue. You either managed to climb that wall, or you didn't. But for combat, it's very dependent on whether you are rolling to cause damage to the opponent (and then "how much"), or whether the roll is just one in a series which determines damage done that round.

I'm also a GM that doesn't specifically scale opposition to the party, but rather to the circumstances. Difficult things are going to be difficult regardless of how powerful/skilled the PCs are. If the PCs are highly skilled, then difficult things may be "easy" for them. If they are not so skilled, then moderately difficult things might be "hard". The world doesn't adjust itself to whoever the players happen to be playing at the moment, so I find the concept of adjusting "difficulty" to some sort of mathematical percentage somewhat foreign.

I tend to flip it around the other way: how powerful the PCs are determines what level of difficulty they are capable of handling among the things in my game world. If a group of low-level PCs get it into their heads to go marching up to the Tower of Doom(tm) and attack the uber-powerful BBEG there, they're going to straight up die. If, on the other hand, a highly skilled and powerful group of PCs decide to spend their time whacking the local thieves in some small town, they can easily curb-stomp them if they really want to. I'm not going to make street thugs more powerful, or BBEGs less powerful, based on the power level of the PCs.

Having said that, if I'm laying out a specific scenario for the players to run through, I absolutely take into account their relative power level and adjust accordingly. If I write something and drop a hook for them to follow, the assumption is that I intend for this specific set of characters to follow that hook where it leads. Part of my job is to make sure that this is something they can handle. Or, if there are elements of it that they cannot, to make that sufficiently obvious to them that they don't try. I don't think I've ever actually thought directly in terms of "average likelihood of success/failure", though. It's a more general thing in terms of the broad difficulty/power level of the opposition.

I think this is one area where a GM can overthink things. About the only adjustments I make are really broad (challenge the party, but don't overwhelm them), plus making sure the players themselves are engaged in the game. That latter one really depends on the players and the specifics of the characters they are playing. I've also found that this can be more difficult in class-based games, since some classes more or less require specific situations to shine. Some players find other ways to enjoy situations when their character's "special set of skills" doesn't directly apply; others are not so great at this. So yeah, sometimes you do have to take that into account. But on the other hand, I'm not going to intentionally place things in my scenario simply because there is a character with a given skill that would be useful there.

Vahnavoi
2023-04-12, 01:42 AM
Don't make things too hard for yourself. Simply put, in the context of this thread, success means beating a random function. It doesn't matter what that function stands for; the same principles apply.

Once you ditch that probabilistic assumption, defining what success is remains easy for any given game, but it may no longer make sense to talk about "chance". Effectively, you've changed topics from this one to the one in the difficulty thread.

For example, in any deterministic game, such as Chess, or any roleplaying game situation where the result does not hinge on independent random chance, "success" is the victory state for whatever game goal is being pursued, such as putting the opponent in checkmate or making sure you have enough food to survive a trip. Finding these goals is often super easy as a game master or designer, since you are literally the one setting them in place; it only gets difficult when you're dealing with self-selected implicit goals of players - that is, when players have goals of their own that they have not told you. But there is no before-the-fact "chance" to it, no probabilistic spread in the game mechanics. There might be statistical variance to be found after the fact - for example, you might find only 50% of players succeed in a scenario - but it should not be taken as any individual player's chance to succeed. You might instead be measuring how many variants of the scenario are winnable at all, or how many people are skilled enough to beat it.

So, accept "chance of success" is all about beating random functions. If you want to talk about other forms of success, talk about discreet steps required to achieve game goals, and what is actually required of players to meet them.

King of Nowhere
2023-04-12, 07:36 AM
Actually, a better factor to consider is how the players approach the challenge.
Meeting an enemy in a dungeon room after having already suffered a couple of fights can be a very challenging combat; ambushing the same enemy after buffing yourself may lead to instant victory without any resources spent. Or a boss fight may be impossible if taken head-on, and require effort from the players to make it more manageable. Tactics also make a difference: maybe a fight would be very hard if the party slowly wore down the enemy tank, while it would become a cakewalk if they used the right option to disable or evade the tank and clear out the glass cannons behind it first.

And so, the % probability on a die is a poor indicator. If roleplaying were just reduced to "roll a die, you need X to win", it would be a very boring game.
I could then rephrase my optimal chance of success like this:

I want to be able to just jump into the fray and throw the dice and win most of the time; I want the feeling that my character is powerful, competent, and capable of brute-forcing most situations.
But! I also require that there will be times when just throwing myself at a problem without preparation will result in (likely) defeat - and I want to have to use my brain to tackle those problems and bring them down to a point where I can brute-force them. Otherwise I'm just a boring invincible hero or Mary Sue, and few people actually enjoy that.

Grod_The_Giant
2023-04-12, 08:23 AM
This is extremely relevant. What does "success" mean? A lot of the time, when using a skill, it's a binary issue. You either managed to climb that wall, or you didn't. But for combat, it's very dependent on whether you are rolling to cause damage to the opponent (and then "how much"), or whether the roll is just one in a series which determines damage done that round.
Another factor to consider is how long it'll be until you can try again--if you missed with your attack, will your next turn be in ten minutes or sixty? If you failed a skill check, does that mean a small wrinkle in your plan or the loss of an hour's worth of progress?