
RED for real.



PhoenixPhyre
2022-08-04, 11:12 AM
I've talked about RED (Rogue Equivalent Damage) as a touchpoint/unit for measuring outgoing damage before. My initial posting had a bunch of assumptions around it that have since been refined. So this post is an attempt to clarify some of those issues, be more philosophical about what it represents and why I find it useful, and ask for feedback as well as what I should work on next.

What is RED?
RED stands for Rogue Equivalent Damage. It's a unit of measurement for damage, effectively a rescaling of the accuracy-adjusted DPR at a given level. As the name suggests, the rescaling factor is that of a rogue. Specifically, 1.0 RED is defined to be the accuracy-adjusted damage output of
1) a no-subclass (see commentary below) rogue
2) wielding a regular shortbow
3) who always gets sneak attack but never advantage
4) with Dexterity modifiers of +3 at levels 1-3, +4 at levels 4-7, and +5 at level 8+
The accuracy adjustment is applied afterward--all calculated RED values use the same target AC values, depending on the accuracy mode selected. A code sketch of the raw, pre-accuracy baseline follows.
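In code form, that baseline is just dice counting before accuracy enters. A minimal TypeScript sketch of my reading of the definition (function names are mine, not from the calculator's source):

// Dex modifier schedule from the definition above: +3 (levels 1-3), +4 (4-7), +5 (8+).
function dexMod(level: number): number {
  return level >= 8 ? 5 : level >= 4 ? 4 : 3;
}

// Raw per-round damage: 1d6 shortbow + ceil(level/2)d6 sneak attack + Dex mod,
// assuming every attack hits and nothing crits.
function rawBaseline(level: number): number {
  const dice = 1 + Math.ceil(level / 2); // weapon die plus sneak attack dice
  return dice * 3.5 + dexMod(level);     // 3.5 = average of a d6
}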

Why these particular definitions? Mostly ease of use and simple scaling. In the simplest accuracy model (100% hit, no crit or miss), it's just counting dice, and it scales uniformly every other level. Is it an actual scenario I would expect? No. I'd expect to be without sneak attack at least some of the time and to have advantage at other times. But as a unit definition, it's relatively arbitrary. However, after doing the work and a lot of comparisons, I have a strong suspicion that it is close to (if not exactly) the model used by the developers during the production of the basic rules, PHB, and MM. Everything else (monster health, monster AC, other basic rules classes' damage output, etc.) falls nicely in line in relatively predictable patterns. So as units go, it seems to work well for me.

Why not warlocks? Sure, you could do that. As I said, the choice of unit is fairly arbitrary. But warlocks with EB/AB don't scale as nicely, since they get extra beams irregularly and their modifier goes up out of sync with this.

Why no subclass? Because it started as an exploration of the basic rules, and the only subclass there is the Thief rogue. And the level 17 ability is the main damage feature...and is really annoying to simulate (and involves making assumptions about number of combats that I'd rather not do). So mostly just laziness. No effect below level 17, so...

Why use a rescaling unit at all?
Why not just use (accuracy-adjusted) DPR directly? In large part, because my brain doesn't like big numbers. Graphing things that all increase monotonically (except moon druids who only use bear form, oddly enough) makes picking out signal vs noise harder. And makes averaging things over a span of levels less meaningful. For me, looking at raw DPR numbers raises the question "ok, that thing does 42 DPR. Is that a lot at that level? A little?" Comparing to a known quantity (whatever that might be, as long as it's reasonably sane) solves those issues. The numbers mostly end up between 0 and 2-3-ish RED, which are nice comfortable numbers. Instead of increasing in jumps, things fluctuate, meaning that it's easy to spot trends and outliers, as well as take averages. It also gives a touchpoint--if most simple scenarios end up in the 1.X RED range (for those that are caring about damage at all) and you're in the 4.X range OR the 0.2X range, there's cause for concern and second looks. Or patting yourself on the back for how powerful you are. Your choice. But offhand, I wouldn't know at all how to interpret a specific DPR number, because it includes a hidden level dependence.

Another feature (for me) of the tool is that it allows me to manipulate assumptions. I don't expect this to calculate actual play, but to see things like
* How much of a difference does advantage make?
* If you're completely out of useful spell slots (useful for dealing damage), how bad off are you with just cantrips?
* What is the effect of various assumptions about feats?
* Where are the strong (relatively) levels for different scenarios (such as moon druids)?
* What is a reasonable estimate for the developers' assumptions around damage output as a function of level?

Most of these are known at least to some degree, but having a nice way of comparing them helps me. Here, the RED unit itself isn't as important, but I do find the rescaling helps identify trends. Just like showing some things on a log plot vs a linear one can reveal (or hide!) features. A visualization tool, not an authoritative statement about what is, isn't, or even should be/should not be.

What if RED isn't useful to me?
Great. Fine. Ignore it. It's just a unit of measurement, a tool to be used if you find it useful. Or not if you don't.

But how do I work with RED if I want to?
Two choices--
* You can calculate it yourself. Pick a target AC (of your choice) and do the accuracy adjustment for (1 + sneak attack dice(level))d6 + mod. Then calculate the accuracy-adjusted DPR (using the same target number, preferably) for whatever you're measuring; the RED value is DPR(level) / baseline(level). You can calculate the baseline once and store the results for future lookup as well. (A code sketch follows this list.)
* You can use my handy-dandy (if I do say so myself) calculator: https://admiralbenbo.org/red-calculator/calculator.html. Pick from one of the provided accuracy modes, pick one or more of the provided preset scenarios to compare or insert your own custom DPR data (after doing the accuracy adjustment for whatever mode you're in), and let it do the calculations and graphing. For output, you can choose to look at the calculated damage in units of RED, in straight DPR, or look at the accuracy (not including crits, just regular hits) that it thinks that scenario has.
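If you go the do-it-yourself route, here's a hedged TypeScript sketch of option one, reusing rawBaseline and dexMod from the sketch above (the hit/crit chances are whatever your chosen accuracy mode produces; the example DPR is made up):

// Accuracy-adjusted baseline: crits double the dice but not the modifier.
function adjustedBaseline(level: number, hit: number, crit: number): number {
  const base = rawBaseline(level);
  return hit * base + crit * (2 * base - dexMod(level));
}

// Convert any accuracy-adjusted DPR (computed against the same target
// numbers!) into RED units.
function toRED(dpr: number, level: number, hit: number, crit: number): number {
  return dpr / adjustedBaseline(level, hit, crit);
}

// Example: 18 DPR at level 5 under a 65% hit / 5% crit assumption:
// toRED(18, 5, 0.65, 0.05) = 18 / 13.3, about 1.35 RED.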

For saving throw modifiers, I used the only data set I had--the average (mean) saving throw modifier (for that save) for monsters of that same CR from the MM, VGtM, and MToF. If you've got a better choice, please do let me know. AC comes from the DMG tables for monster creation.

Provided accuracy modes:
* Default is "boss mode". CR = level + 3. Why 3? That's about what seemed a decent average from Xanathar's guidance on creating solo encounters.
* Half Level mode: CR = level / 2. Why / 2? That's a simplification of some work I did to calculate (based on Xanathar's guidance for group encounters) the median expected CR per level for group encounters. This is basically the "bunch of little stuff" setting.
* Equal Level mode: CR = level.
* Ignore Accuracy mode: 90% hit, 5% crit (base), 5% miss. Factors like advantage/disadvantage are ignored. Extra crit chance is included by shifting the crit up and dropping the hit. This was the default, but it kinda sucks and isn't very reasonable.

Custom data should be in the format [#, ..., #]. There should be between 1 and 20 numbers there, one for each level. If you don't want to provide a number for a given level, replace it with the string null (no quotes). You can cut the array short, but it will interpret that as "display for levels 1 - N" for N input numbers. Null entries will make gaps in the graphed curve.
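Since that format is valid JSON, you can sanity-check your input trivially. This is just an illustration of the format, not the calculator's actual parsing code:

// Levels 1-4 provided; level 2 skipped, which renders as a gap in the curve.
const custom: (number | null)[] = JSON.parse("[10.5, null, 14.0, 14.65]");
// custom.length = 4, so the graph displays levels 1 through 4 only.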

Wishlist for the calculator
1. I'd like to make it easier to adjust the dials (such as "what % of the time do you have advantage" or "what % of the time does BB proc the extra damage") without having to make new presets and recompile.
2. There are always more presets to add.
3. I'd like to add another output mode based on "Rounds to Kill".
4. I'd like to have an output showing more text about the assumptions behind the selected presets. Because the presets themselves have lots of assumptions made, and the more complex presets have more complex (and debatable) assumptions. Especially when long-rest resources are involved.
5. I'd like to be able to link to specific presets and/or export the graphs/tables in some way.
6. ??

I want to contribute
You can't, go away. No really, the tool is open source. So if you're tech savvy (typescript, node.js, html and css), feel free to fork the repository and submit a pull request. Repo is https://github.com/bentomhall/red-calculator.

If you're not, let me know what you'd like to see. For new presets that aren't trivial, please do supply the underlying assumptions around what resources are used when. Very complicated or multi-source presets might get relatively deprioritized, because the effort involved scales faster than I'd like. I'm especially interested in help with visual design--it's horribly ugly and I'm no designer.

Responses I don't find useful
* RED is bad and not useful to me! Great. Ignore this thread then.

Reach Weapon
2022-08-04, 12:26 PM
RED stands for Rogue Equivalent Damage. It's a unit of measurement for damage [...] defined to be the accuracy-adjusted damage output [...]

I haven't really thought this through, but I am flagging the "accuracy-adjusted" part for your reconsideration, as without it RED is a constant function that can be easily referred to in posts, and aRED could then be used generically where you're writing RED, and something like .65RED or RED(.65) would be pretty obvious.

It might also be good to have a way to notate assumptions added, like being able to easily say we're assuming a short rest every 2 combats of 3 rounds each in a 6 combat day.

Skrum
2022-08-04, 12:37 PM
Great explanation!

Are you able to model limited use abilities, like maneuvers or divine smite? It would take a lot of assumption about how frequently the ability is being spent, but I'd be very curious what that looks like. On the table I play with, due to time constraints, we usually do the "5m adventuring day" - typically 2 very difficult encounters per long rest. Rogues feel quite weak in this format, unsurprisingly. But a number comparison on *how* weak would be cool, and also an idea of what round the rogue catches back up again.

PhoenixPhyre
2022-08-04, 12:53 PM
I haven't really thought this through, but I am flagging the "accuracy-adjusted" part for your reconsideration, as without it RED is a constant function that can be easily referred to in posts, and aRED could then be used generically where you're writing RED, and something like .65RED or RED(.65) would be pretty obvious.

It might also be good to have a way to notate assumptions added, like being able to easily say we're assuming a short rest every 2 combats of 3 rounds each in a 6 combat day.

There's a setting to disable that accuracy adjustment (except including crits, because lots of comparisons don't work right at all if you do). But that also disables advantage (or disadvantage), which means a lot of other effects don't work properly. Including making GWM look a whole lot better, since it would ignore the effects of the -5 part. Half-baking the accuracy adjustment is worse than not doing it at all. And you have to do it a little bit just to make the majority of comparisons meaningful.

I will note that using the CR = level setting means that the accuracy adjustment (assuming no things like -5 or advantage) is a flat 0.65 (except at level 9, where it's 0.7). So that could be the "standard" value -- assume (unless stated otherwise) that attacks hit 65% of the time and crit 5% of the time.
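For the curious, that 0.65 falls straight out of ordinary d20 math against the DMG's AC-by-CR table. A sketch of the standard to-hit algebra (not code from the calculator):

// Chance that d20 + attackBonus meets or beats AC, with a natural 1 always
// missing and a natural 20 always hitting.
function hitChance(attackBonus: number, ac: number): number {
  const needed = ac - attackBonus;  // minimum d20 roll that hits
  const p = (21 - needed) / 20;     // fraction of d20 faces at or above it
  return Math.min(0.95, Math.max(0.05, p));
}

// e.g. a level 5 rogue (+3 proficiency, +4 Dex) against the DMG's AC 15 for
// CR 5: hitChance(7, 15) = (21 - 8) / 20 = 0.65.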


Great explanation!

Are you able to model limited use abilities, like maneuvers or divine smite? It would take a lot of assumption about how frequently the ability is being spent, but I'd be very curious what that looks like. On the table I play with, due to time constraints, we usually do the "5m adventuring day" - typically 2 very difficult encounters per long rest. Rogues feel quite weak in this format, unsurprisingly. But a number comparison on *how* weak would be cool, and also an idea of what round the rogue catches back up again.

I can (and do for a couple presets), but yes. It requires a lot of assumptions. From what I've found, it seems to matter most how many rounds per rest (long or short depending on the feature in question), not how many combats. Except things with durations like rage. Things that recharge on short rests and are once/round are easy (flurry of blows, for instance, or action surge).

So currently the things that depend on long rests are (along with the assumptions I've made):
* Barbarians and rage.
** Case 1: never raging.
** Case 2: always raging (more specifically, raging for all rounds of combat).
** Case 3: raging as many combats as possible, assuming it never drops during combat. 5 fights (a total of 15 rounds) per day.
** Case 4: same as case 3, but also adding frenzy once per day

Plus some variants for non-resource things like reckless and GWM.

* Clerics and Spiritual Weapon.
** Only case so far is the lazy one: assume either 0 or 100% uptime on spiritual weapon. A drastic exaggeration to be sure.

I generally try to start with the boundary conditions--assuming it's up all the time or up none of the time. And then (like I did for fighters and action surge), develop more test cases for points in between that seem rational.
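Once you fix rounds per rest, the boundary-condition cases collapse into a weighted average. A hedged sketch of the idea (names are mine, in the spirit of case 3 above):

// Average DPR for an on/off resource like rage, given how many of the day's
// combat rounds it can realistically cover.
function uptimeWeightedDpr(
  dprOn: number,         // e.g. raging DPR at this level
  dprOff: number,        // the same build without the feature
  roundsCovered: number, // e.g. rages per day * 3 rounds per fight
  roundsPerDay: number   // e.g. 15 (5 fights of 3 rounds)
): number {
  const uptime = Math.min(1, roundsCovered / roundsPerDay);
  return uptime * dprOn + (1 - uptime) * dprOff;
}

// Case 2 above is uptimeWeightedDpr(x, y, roundsPerDay, roundsPerDay);
// case 1 is uptimeWeightedDpr(x, y, 0, roundsPerDay).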

Yakk
2022-08-04, 02:58 PM
Great idea. But the sad part is that it isn't a very useful metric.

A better metric is either (a) assuming TWF rogue (two short swords, no advantage, sneak attack) or (b) starting at level 3, using steady aim every round (hence has advantage every round).

This one results in a damage curve that has, basically, the wrong slope -- the sneak attack damage dice don't scale up as fast as they do in real play. So characters whose damage output becomes really bad at late levels look, in this metric, to have steady ratio compared to the baseline.

While I sort of get Tasha's -- not wanting to put that in your balance math -- two short swords was a standard, not fancy, obvious and common rogue build from day 1. (The shortbow with hide-every-turn for advantage was also common in my experience; Tasha's Steady Aim usually just got rid of a die roll that had a high chance of success anyhow.)

The effect of two short swords is a small bump from the second swing, but more importantly it makes sneak attack damage more reliable. This moves the slope of rogue damage upwards.

Second, your CR+3 thing ends up being a really complex way to describe a fixed accuracy (65% or 70%) that barely moves. Your model gets a lot more complex -- it requires a lookup table -- for not much return. And we don't have strong evidence that the variation of your model from the fixed accuracy is useful information, or if it is just an artifact of the table you are using. So, extra complexity, with no good case that the complexity increases the quality of the model, isn't good.

...

I like the 2 short sword model myself. It is simple to explain, and doesn't bring up Tashas. If we pick a fixed accuracy (65%), and use sneak attack on the first attack that hits, we deal:

Stat bonus *.65 (first sword)
+ 1d6 * 1.4 (two swords, both can crit)
+ 0.95 * [S]d6 (sneak attack with 88% hit and 7% crit chance)

Every [S] is worth 3.325 damage.
Base weapon damage is 4.9 with 1.95 from Dex bonus (each dex bonus is worth 0.65).

L1-2 this is 10.2 (16 dex, 1d6 sneak)
L3 this is 13.5 (16 dex, 2d6 sneak)
L4 this is 14.2 (18 dex, 2d6 sneak)
L5 this is 17.5 (18 dex, 3d6 sneak)
L8 this is 21.5 (20 dex, 4d6 sneak)
L11 this is 28.1 (20 dex, 6d6 sneak)
L19 this is 41.4 (20 dex, 10d6 sneak)

With just one short sword (or a bow and no advantage) we get:
Stat bonus * .65
+ 1d6 * .7 (crit)
+ .7 * [S]d6

L1-2 this is 6.85 (16 dex, 1d6 sneak)
L3 this is 9.3 (16 dex, 2d6 sneak)
L4 this is 9.95 (18 dex, 2d6 sneak)
L5 this is 12.4 (18 dex, 3d6 sneak)
L8 this is 15.5 (20 dex, 4d6 sneak)
L11 this is 20.4 (20 dex, 6d6 sneak)
L19 this is 30.2 (20 dex, 10d6 sneak)

Every [S] is worth 2.45 damage.
Base weapon damage is 2.45 with 1.95 from Dex bonus (each dex bonus is worth 0.65)

Just picking up a 2nd short sword boosts Rogue damage at level 11 up to nearly that of a single-short-sword L20 Rogue.

So your metric for RED will report "your damage is fine at level X", when actually you are almost 10 levels behind the curve.

Dork_Forge
2022-08-04, 03:36 PM
Great idea. But the sad part is that it isn't a very useful metric.

A better metric is either (a) assuming TWF rogue (two short swords, no advantage, sneak attack) or (b) starting at level 3, using steady aim every round (hence has advantage every round).

This one results in a damage curve that has, basically, the wrong slope -- the sneak attack damage dice don't scale up as fast as they do in real play. So characters whose damage output becomes really bad at late levels look, in this metric, to have steady ratio compared to the baseline.

While I sort of get Tasha's -- not wanting to put that in your balance math -- two short swords was a standard, not fancy, obvious and common rogue build from day 1. (The shortbow with hide-every-turn for advantage was also common in my experience; Tasha's Steady Aim usually just got rid of a die roll that had a high chance of success anyhow.)


Whilst it's an easy trick to pull, you can't really claim that either TWF or Steady Aim is baseline for a Rogue; they're nowhere near popular or consistent enough.

PhoenixPhyre
2022-08-04, 03:52 PM
Great idea. But the sad part is that it isn't a very useful metric.


Warning--see the section titled "What if RED isn't useful to me?"

Philosophically, RED isn't a baseline for saying "this is acceptable." It's merely an arbitrary unit to rescale damage numbers. Think of the foot or the meter--as long as we all agree on what to use, the exact definitions don't really matter.



A better metric is either (a) assuming TWF rogue (two short swords, no advantage, sneak attack) or (b) starting at level 3, using steady aim every round (hence has advantage every round).

This one results in a damage curve that has, basically, the wrong slope -- the sneak attack damage dice don't scale up as fast as they do in real play. So characters whose damage output becomes really bad at late levels look, in this metric, to have steady ratio compared to the baseline.

While I sort of get Tasha's -- not wanting to put that in your balance math -- two short swords was a standard, not fancy, obvious and common rogue build from day 1. (The shortbow with hide-every-turn for advantage was also common in my experience; Tasha's Steady Aim usually just got rid of a die roll that had a high chance of success anyhow.)

The effect of two short swords is a small bump from the second swing, but more importantly it makes sneak attack damage more reliable. This moves the slope of rogue damage upwards.


I strongly disagree with the idea that the baseline should include advantage every turn. Or even almost every turn. Because that represents almost a doubling of the baseline. One that basically no other "normal" basic rules character can meet reliably. Specifically, the shortbow rogue with constant advantage is at (using the above definition) 2.05 RED, with only being substantially below that for levels 1 and 2. Comparatively, a greatsword champion fighter with the GWF style, getting a short rest every 4 rounds is only at 1.75 RED. So I reject the idea that, as a system design assumption, the baseline for "appropriate" damage is constant advantage. Especially with TWF thrown in.

So just using the TWF model, under the stated assumptions about accuracy (see below), you end up at 1.36 (current)RED, starting a bit higher and then sloping downward. That's middle-of-the-pack for DPR-focused scenarios with sane assumptions about rests:

TWF rogue (no advantage): 1.36 RED
GS champion fighter, 1 short rest/9 rounds: 1.43 RED.
SnB champion fighter, 1 short rest/9 rounds: 1.2 RED.
Warlock, EB+AB and 100% hex uptime: 1.21 RED
Monk (no subclass boosts, flurrying as much as possible with 1 SR every 9 rounds): 1.29 RED.

So that could work as a baseline. It's substantially more complex and requires a lot more assumptions and limitations, however. For one thing, it means that if you're a TWF rogue and use your class features such as Cunning Action, you're necessarily falling below the expected curve substantially. By between 25 and 30% on any round you didn't use your TWF attack. It also means that we can't easily compare the effects of changing assumptions around those sorts of things.

And it's actually way more complicated to calculate. Because you have the "did I get sneak attack on my first hit" cross terms. Shortbow RED is simply

let base = (1 + ceil(level/2))*3.5 + modifier.
1 RED = 0.65 * base + 0.05*(2*base - modifier).

Or for varying the assumption around hit rates (see below), simply 1 RED(h,c) = h*base + c*(2*base - modifier). That's trivial to calculate.

From a system design perspective, there is a lot of other evidence that the actual design baseline was 1 attack per round. For example, EB + AB (no hex) ends up pretty darn close to 1 RED. Much closer than to 1 TWF!RED. And it follows a nice pattern where it jumps above 1 RED when you get a new beam, then falls off over time, then jumps up again. Which is exactly what you'd expect. Compared to 1 TWF!RED, it's just consistently "way low". Also, the evidence from the monsters (not published yet) is that 1 RED is roughly a consistent 3.x rounds to kill over a huge swath of levels. Which fits really closely to the "calculate average DPR over the best 3 rounds" guidance, and a lot of other guidance.

So I disagree that it's a better model. You're welcome to create your own, but I'm going to stick with what I have.



Second, your CR+3 thing ends up being a really complex way to describe a fixed accuracy (65% or 70%) that barely moves. Your model gets a lot more complex -- it requires a lookup table -- for not much return. And we don't have strong evidence that the variation of your model from the fixed accuracy is useful information, or if it is just an artifact of the table you are using. So, extra complexity, with no good case that the complexity increases the quality of the model, isn't good.


Assuming 65% is identical to switching to the (provided) CR = level mode, except at level 9 (where because proficiency jumps up but the table doesn't change, you end up with a 70% hit rate). Using the table is intrinsically better than just assuming a flat rate, because it leverages what the developers told us they were assuming. Assuming a fixed, static hit rate also means that things like advantage/disadvantage end up affecting everyone identically. Which isn't true at all. It also means you can't easily do the calculation for actual monsters consistently--your unit ends up not scaled the same way as your calculated numbers. Which is just wrong.

It might be wise to set the default mode to CR = level. But letting the user decide what mode to be in does matter, at least for the information I'm trying to extract. I will note that the differences are fairly small between the 3 "sane" modes (ie not including turning off accuracy entirely). TWF goes from 1.36 (CR = level ==> 0.65 hit chance) to 1.38 (CR = level + 3, due to some weirdnesses in how the DMG table scales) to 1.29 (CR = level / 2, again due to differences in scaling). I don't really consider changes in the 2nd decimal substantial. Those changes are almost entirely due to one or two specific levels early on.

Accuracy data for those 3 cases:
* CR = level / 2: average accuracy 75%. Starts at 65%, but by level 5 it's bouncing between 75% and 80%.
* CR = level: average accuracy 65.25%. 65% except for level 9, where it's 70%.
* CR = level + 3: average accuracy 62.25%. Starts at 65%, drops to 55% really early on (level 3), then bounces between 60% and 65%.

Also, assuming a fixed hit rate means that fighters (for instance) are disadvantaged and the costs of picking up a feat are lessened--fighters max their stat at level 6, not level 8. Which means they have higher accuracy during those levels. And having that data shows interesting things.

And this gets even weirder when you combine something like GWM's -5/+10 and a source of advantage. Because that scales non-linearly (-5 + advantage isn't -0, it's...different). And does nothing for saving throws, which do not have a fixed 65% hit chance. Not even in the slightest. And may vary depending on how you prioritize your modifiers. So you absolutely will always, to do the calculations right, need a table lookup somewhere. So it makes more sense, to me at least, to just use the same mechanism and assumptions for all cases.
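To make that non-linearity concrete, here's a toy TypeScript illustration (my own example numbers, not calculator output):

// Advantage turns a per-roll success chance p into 1 - (1 - p)^2, so a flat
// -5 penalty (-0.25 to p) costs a different amount depending on where p starts.
function withAdvantage(p: number): number {
  return 1 - (1 - p) * (1 - p);
}

// p = 0.65: advantage gives 0.8775. Dropping to p = 0.40 for GWM's -5 gives
// 0.64 under advantage. So the -5 costs 0.2375 of hit chance with advantage
// versus a flat 0.25 without it, and the cost shifts with base accuracy
// instead of staying fixed.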

DPR is a function of target, including accuracy of the sub-components. Because sacred flame + spiritual weapon has two components with different accuracies that don't move together at all. So the baseline should also be a function of those same target parameters. It's an independent, external assumption.

For any hit rate h, crit rate c, and level L, RED is defined as RED(h, c, L) = h*[(1 + ceil(L/2))*3.5 + modifier(L)] + c*[2*(1 + ceil(L/2))*3.5 + modifier(L)]. Simple, deterministic, and replicable. And thus the scaled DPR is given by DPR(h, c, L)/RED(h, c, L). Same assumptions on both sides.

MadMusketeer
2022-08-04, 07:25 PM
While I understand it would be hard to implement, multiclassing functionality would be really useful. It would show where particular multiclass builds fall on the damage curve, and when particular builds (or level progressions) 'come online'. It doesn't even need to be super complex or in-depth: you could probably just use the same assumptions (or scaled versions) and combine them. For example, for a Rogue with a 2-level Barbarian dip, you could just combine your assumptions about Rage, Sneak Attack (at a lower level) and Advantage (Reckless) and you'd have a mostly accurate RED comparison (scaled by level) for the build. This would also work for more complex or resource-focused multiclasses - for a sorcadin, for example, you could just set assumptions about proportion of slots for each level for a Paladin (same or different across levels, either way) and scale that with your sorcerer slots, but put different features at different levels (i.e. you might get Extra Attack later, or never, or you might not multiclass until level 6). For multiclass assumptions apart from the base classes, you'd probably have to build them in separately (i.e. quickened Booming Blade assumptions), but for a lot of multiclasses that just wouldn't be important. From an input perspective, I'd design it around build progression (i.e. 2 levels Barb then 18 levels Rogue) - that would allow you to see how a build progression does across levels and how it compares to other builds (single class or otherwise).

Gignere
2022-08-04, 07:51 PM
While I understand it would be hard to implement, multiclassing functionality would be really useful. It would show where particular multiclass builds fall on the damage curve, and when particular builds (or level progressions) 'come online'. It doesn't even need to be super complex or in-depth: you could probably just use the same assumptions (or scaled versions) and combine them. For example, for a Rogue with a 2-level Barbarian dip, you could just combine your assumptions about Rage, Sneak Attack (at a lower level) and Advantage (Reckless) and you'd have a mostly accurate RED comparison (scaled by level) for the build. This would also work for more complex or resource-focused multiclasses - for a sorcadin, for example, you could just set assumptions about proportion of slots for each level for a Paladin (same or different across levels, either way) and scale that with your sorcerer slots, but put different features at different levels (i.e. you might get Extra Attack later, or never, or you might not multiclass until level 6). For multiclass assumptions apart from the base classes, you'd probably have to build them in separately (i.e. quickened Booming Blade assumptions), but for a lot of multiclasses that just wouldn't be important. From an input perspective, I'd design it around build progression (i.e. 2 levels Barb then 18 levels Rogue) - that would allow you to see how a build progression does across levels and how it compares to other builds (single class or otherwise).

With Steady Aim it's not really worth it for rogues to take the Barbarian dip anymore, unless you're going for Extra Attack with Barbarian. Why attack recklessly and hand the enemies advantage against you when you can just Steady Aim for advantage without giving anything back?

PhoenixPhyre
2022-08-04, 08:44 PM
While I understand it would be hard to implement, multiclassing functionality would be really useful. It would show where particular multiclass builds fall on the damage curve, and when particular builds (or level progressions) 'come online'. It doesn't even need to be super complex or in-depth: you could probably just use the same assumptions (or scaled versions) and combine them. For example, for a Rogue with a 2-level Barbarian dip, you could just combine your assumptions about Rage, Sneak Attack (at a lower level) and Advantage (Reckless) and you'd have a mostly accurate RED comparison (scaled by level) for the build. This would also work for more complex or resource-focused multiclasses - for a sorcadin, for example, you could just set assumptions about proportion of slots for each level for a Paladin (same or different across levels, either way) and scale that with your sorcerer slots, but put different features at different levels (i.e. you might get Extra Attack later, or never, or you might not multiclass until level 6). For multiclass assumptions apart from the base classes, you'd probably have to build them in separately (i.e. quickened Booming Blade assumptions), but for a lot of multiclasses that just wouldn't be important. From an input perspective, I'd design it around build progression (i.e. 2 levels Barb then 18 levels Rogue) - that would allow you to see how a build progression does across levels and how it compares to other builds (single class or otherwise).

Yeah, this would be a scenario-by-scenario thing. And a ton of work. However, if you want to calculate the DPR yourself, you can plug that into the custom preset to convert to RED and graph it alongside anything else. Or if you want to give me specific scenarios, spelled out, I could build one or two. The code isn't as nice as "just apply sneak attack here" due to things like crits, the influence of 1/turn things, etc.

------

Exploring things--I've started to implement a "rounds to kill" mode, where Rounds to Kill is defined as (average HP from the DMG for a creature of CR X) / (accuracy-adjusted DPR against a creature of CR X). It's not released yet (it needs more work), but here are a few interesting (to me) trends:

Rogues in all 3 presets so far (baseline/shortbow without advantage, TWF without advantage, Steady Aim/shortbow): fairly constant, just at different levels. Pick your poison/expectations. Note this is solo, so a party of 4 rogues would kill it in 1/4 the time (theoretically).

https://i.ibb.co/DCpFZFC/rounds-to-kill-rogue.png (https://ibb.co/R0yKqK0)


Also of note--a party of 4 baseline rogues would kill an average monster of CR = level in ~3 turns (2.X). Which makes me go "huh. Maybe ~4 RED (+- 1 or so) is what the system expects out of a party, however you get there."

Warlocks, in the 3 presets I have so far (EB without AB/hex, EB + AB, EB + AB + 100% hex uptime): Big jaggies and spikes.

https://i.ibb.co/Lg8fFhx/rounds-to-kill-warlock.png (https://ibb.co/jwTFNZy)


A priori, if I had a smoothly-scaling function like "expected HP", I'd expect that the expected (by the system) party damage output should scale roughly similarly--nice and smooth. Which leads to a roughly constant/flat graph. Which says that the rogue is more like what I'd expect a priori if one of the two was the system designer's touchpoint (in either direction, either monsters -> PCs or PCs -> monsters).

Yakk
2022-08-04, 10:50 PM
Warning--see the section titled "What if RED isn't useful to me?"

Philosophically, RED isn't a baseline for saying "this is acceptable." It's merely an arbitrary unit to rescale damage numbers. Think of the foot or the meter--as long as we all agree on what to use, the exact definitions don't really matter.
If it is arbitrary, why not use "10 damage per round"?

I hold it isn't arbitrary. It is an attempt to generate some kind of "baseline damage at a level".

And then I don't find it a good baseline.

I strongly disagree with the idea that the baseline should include advantage every turn. Or even almost every turn. Because that represents almost a doubling of the baseline. One that basically no other "normal" basic rules character can meet reliably. Specifically, the shortbow rogue with constant advantage is at (using the above definition) 2.05 RED, with only being substantially below that for levels 1 and 2. Comparatively, a greatsword champion fighter with the GWF style, getting a short rest every 4 rounds is only at 1.75 RED. So I reject the idea that, as a system design assumption, the baseline for "appropriate" damage is constant advantage. Especially with TWF thrown in.
Wait, what? 2.05 RED with advantage? That makes zero sense.

Advantage cannot be 2.05x damage without advantage. It is worse than attacking twice in almost every situation.

SB with advantage, 65% hit 5% crit.
88% hit, 10% crit.

Suppose level 5. 3d6 sneak @ .98, 1d6 weapon @ .98, 4 dex @.88 is 17.24 damage.

Without advantage it is 65% hit 5% crit for .7 * 4d6 + .65 * 4 = 12.4 damage.

No 2.05 at all. 1.39 x no advantage damage, not 2.05x. I can't imagine a situation 2.05x damage can come from advantage here.
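If you want to machine-check the claim, the advantage algebra is mechanical. A sketch in TypeScript (exact numbers shift slightly depending on how you book-keep crits):

// Advantage applied to both the hit-or-crit range and the crit range.
function advChance(p: number): number {
  return 1 - (1 - p) * (1 - p);
}

// Level 5 shortbow rogue: base damage 4d6 + 4, 65% hit / 5% crit baseline.
const base = 4 * 3.5 + 4;                             // 18
const flat = 0.65 * base + 0.05 * (2 * base - 4);     // 13.3
const critAdv = advChance(0.05);                      // 0.0975
const hitAdv = advChance(0.70) - critAdv;             // 0.8125 non-crit hits
const adv = hitAdv * base + critAdv * (2 * base - 4); // about 17.75

// adv / flat is about 1.33 -- same ballpark as the 1.39 above, and nowhere
// near 2.05.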

So just using the TWF model, under the stated assumptions about accuracy (see below), you end up at 1.36 (current)RED, starting a bit higher and then sloping downward. That's middle-of-the-pack for DPR-focused scenarios with sane assumptions about rests:

TWF rogue (no advantage): 1.36 RED
GS champion fighter, 1 short rest/9 rounds: 1.43 RED.
SnB champion fighter, 1 short rest/9 rounds: 1.2 RED.
Warlock, EB+AB and 100% hex uptime: 1.21 RED
Monk (no subclass boosts, flurrying as much as possible with 1 SR every 9 rounds): 1.29 RED.

So that could work as a baseline. It's substantially more complex and requires a lot more assumptions and limitations, however. For one thing, it means that if you're a TWF rogue and use your class features such as Cunning Action, you're necessarily falling below the expected curve substantially. By between 25 and 30% on any round you didn't use your TWF attack. It also means that we can't easily compare the effects of changing assumptions around those sorts of things.
Yes, the price of using Cunning Action is less damage. In fact, you can work out the damage you lose; it is even contingent on if your first attack hits.

This is similar to the price of a monk using step, or even stunning strike; when they do so, they lose out on Flurries (at least up to a certain level).


And it's actually way more complicated to calculate.
You do it once.

Because you have the "did I get sneak attack on my first hit" cross terms. Shortbow RED is simply

let base = (1 + ceil(level/2))*3.5 + modifier.
1 RED = 0.65 * base + 0.05*(2*base - modifier).
It is best to split it into scenarios and damage types, then recombine.

So you have the hit, crit and miss scenarios.

And you have weapon damage dice, static damage and sneak attack damage dice.

For SB+Advantage, all damage dice get combined and multiplied by (hit+crit).

For SS+TWF, it does get more complex.
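Sketching those cross terms in TypeScript: enumerate which attack delivers sneak attack and whether that particular attack crit (my decomposition, with illustrative 65%/5% numbers):

// Two attacks; sneak attack rides on the first one that lands, and its dice
// are doubled only if that attack was a crit.
function twfSneakDamage(hit: number, crit: number, sneakDice: number): number {
  const land = hit + crit;              // chance one attack connects
  const secondCrit = (1 - land) * crit; // first missed, second crit
  const secondHit = (1 - land) * hit;   // first missed, second hit
  const diceMult = 2 * (crit + secondCrit) + (hit + secondHit);
  return diceMult * sneakDice * 3.5;
}

// e.g. twfSneakDamage(0.65, 0.05, 3) at level 5: dice multiplier
// = 2*(0.05 + 0.015) + (0.65 + 0.195) = 0.975, so about 10.2 sneak damage.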


From a system design perspective, there is a lot of other evidence that the actual design baseline was 1 attack per round. For example, EB + AB (no hex) ends up pretty darn close to 1 RED. Much closer than to 1 TWF!RED. And it follows a nice pattern where it jumps above 1 RED when you get a new beam, then falls off over time, then jumps up again. Which is exactly what you'd expect. Compared to 1 TWF!RED, it's just consistently "way low". Also, the evidence from the monsters (not published yet) is that 1 RED is roughly a consistent 3.x rounds to kill over a huge swath of levels. Which fits really closely to the "calculate average DPR over the best 3 rounds" guidance, and a lot of other guidance.
But in the Warlock power budget, EB+AB sits alongside short rest spell slots.

A rogue's damage budget is their at-will damage. The designers would be screwing up if Warlock at-will damage matched rogue at-will damage.


Assuming 65% is identical to switching to the (provided) CR = level mode, except at level 9 (where because proficiency jumps up but the table doesn't change, you end up with a 70% hit rate). Using the table is intrinsically better than just assuming a flat rate, because it leverages what the developers told us they were assuming. Assuming a fixed, static hit rate also means that things like advantage/disadvantage end up affecting everyone identically. Which isn't true at all. It also means you can't easily do the calculation for actual monsters consistently--your unit ends up not scaled the same way as your calculated numbers. Which is just wrong.
The monster CR calculating table in the DMG is not "what we expect monsters to have", it is a simplified version of the calculations they have in a spreadsheet they used to calculate CR. (I've seen a quote by a D&D developer about that)

Using it, or even using AC of monsters from the MM by CR, isn't going to model anything particularly great; a good model is going to be insanely complex. I mean, you should really weigh monster AC by the threat and HP of the monster and the threat of the entire encounter. (Imagine if most encounters are zero threat but have AC Y; that AC fundamentally doesn't matter, as you don't care if you hit or miss.)

So stepping back from the table, you'll note it has a relatively flat accuracy rate. Instead of using a table whose quirks are not known, but which is known not to reflect actual monsters encountered, simply use a simpler assumption.

This "simpler" is not "the math is easier" or "the justification is simpler", but the explanation of what is going on is simpler.

You are going to make mistakes. Having complex assumptions makes it harder for people to verify your work, which makes your work less reliable and hence less useful. As an example, your shortbow+advantage being 2.05x a shortbow without advantage: a mistake. Without simply stated assumptions, I had to invent a bunch of assumptions in order to figure out what you did wrong, and I still can't be sure.

Meanwhile, with simply stated assumptions, it is easy to verify or error-check your work.

But you do you.

PhoenixPhyre
2022-08-04, 11:44 PM
If it is arbitrary, why not use "10 damage per round"?

I hold it isn't arbitrary. It is an attempt to generate some kind of "baseline damage at a level".

And then I don't find it a good baseline.


Because a flat "10 damage per round" doesn't actually jibe with the rest of the system's scaling.

And no, it's not a baseline damage at a level as such. It's not an assumption that if you're dealing less than 1 RED you're sub-par. It's an investigation into the underlying assumptions behind the system. A tool for investigating these things in a way that tries to reveal the underlying dials and gears. For example: how does the order in which you take different feats change things? How relevant are small changes in modifiers (which, note, a constant hit rate assumption would get badly wrong)? What damage output were the DMG guidelines calculated against? How do those compare to real monsters?

Sure, you could do any of this without RED as a unit. You could do everything in raw DPR. But, as I said, having some unit that scales smoothly with level for normalization makes trends and differences much clearer. Just like you could do climate calculations in absolute temperatures. Or quantum mechanical calculations in straight SI units. But they don't. They rescale the units so the calculations and trends are much clearer. That's all RED is, and yes. You could just go with some arbitrary function of level. I contend you'd lose most of the use, just like if you weighed everything in units of "this apple I found on the street the other day."

RED, as stated, seems (based on a fair bit of investigation) to be a reasonable scaling unit. It behaves well. It's accurate enough. And the underlying assumptions, contra your objection, are quite clear. The underlying assumptions don't actually care what the hit rate is, other than that you apply it the same way to everything. Those are external assumptions. And one of the parameters under test here. Specifically: what set of accuracy parameters makes the most sense based on the effects on the data? It's something I'm modeling, not a modeling assumption. An independent parameter.



Wait, what? 2.05 RED with advantage? That makes zero sense.

Advantage cannot be 2.05x damage without advantage. It is worse than attacking twice in almost every situation.

SB with advantage, 65% hit 5% crit.
88% hit, 10% crit.

Suppose level 5. 3d6 sneak @ .98, 1d6 weapon @ .98, 4 dex @.88 is 17.24 damage.

Without advantage it is 65% hit 5% crit for .7 * 4d6 + .65 * 4 = 12.4 damage.

No 2.05 at all. 1.39 x no advantage damage, not 2.05x. I can't imagine a situation 2.05x damage can come from advantage here.



I can check the numbers on that.

Checking...yeah. There was an error. Not in calculating advantage or anything, but a missing parenthesis meant some of the flat-accuracy damage (included in the calculation because I do want to be able to vary from 100% advantage to some combination of advantage, disadvantage, and flat) was leaking through instead of being multiplied by zero as it should have been. Fixing that brings that case down to (averaging) 1.43 RED. Thanks for the notice.
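For anyone curious, the shape of the bug was the classic grouping one. A hypothetical reconstruction with made-up numbers, not the actual diff:

const advWeight = 1, flatWeight = 0; // 100% advantage scenario
const advDamage = 17.75, flatDice = 10.5, flatMod = 2.6;

// What shipped, effectively: the tail escaped the parentheses, so it wasn't
// scaled by flatWeight and leaked through even at zero weight.
const buggy = advWeight * advDamage + flatWeight * flatDice + flatMod;

// Intended: weight the entire flat-accuracy term.
const fixed = advWeight * advDamage + flatWeight * (flatDice + flatMod);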

As a side note, that's one reason I didn't want to use advantage in the unit itself (instead of as one parameter I can model against). Because it is more complicated and has to be handled separately when you start considering more than one attack with 1/round things (like sneak attack). The baseline should be the simplest reasonable assumption IMO. Both of those words are important--TWF is much more complicated (code wise) than shortbow only; advantage adds a whole nother layer of oddness, and a fixed, constant hit chance is not (IMO) a reasonable assumption for the work I want to do or the tests I want to run with this tool.



Yes, the price of using Cunning Action is less damage. In fact, you can work out the damage you lose; it is even contingent on if your first attack hits.

This is similar to the price of a monk using step, or even stunning strike; when they do so, they lose out on Flurries (at least up to a certain level).


But the system's baseline assumption cannot (rationally) be that it expects you to never use your features, because otherwise you're behind the curve. It has to assume that sometimes a bonus action is just that. A bonus. Not a "must have to stay competitive". Because that's not how it works, and that's been explained exactly as such by JC in various dragon talk podcasts. The system never assumes that you use your bonus action on a regular basis, except to activate long-duration things like Rage. The explicit statement was "if you don't have a bonus action to do, or don't choose to, that's fine. Bonus actions are bonuses."

That is, monsters cannot (rationally) have been calculated based on assumptions that the basic rules/classic party (which includes a rogue) is always able to TWF. And getting at that real set of assumptions, answering that "what assumptions did they make when they made monsters" question is at the core of why I'm building these tools in the first place. Not to compare builds or flex my optimization muscles. Because those things don't interest me at all. They're pointless (because they're comparing against a non-fixed target). I'm interested in questions around the system itself, which does have fixed targets. And fixed assumptions. The task is to bring those to life by modeling different scenarios and seeing what matches the facts on the ground.



You do it once.


For every possible accuracy combination. Every combination of possible targets. Because yes, you do have to consider targets. Without that, most of the analysis (such as the rounds to kill, inclusion of anything involving non-AC defenses or modifiers other than some standard, looking at how it changes when you change that parameter) becomes at best meaningless. The scaling factor must be set using the same standards as the thing being measured, and that involves assumptions that you must make about defensive parameters of the targets.

This is a hill I'm willing to die on--any assumption of a flat hit rate is wrong. It models nothing at all other than a training dummy. And doesn't give any interesting results worth actually calculating about the system itself, because effects of accuracy are
a) different for different scenarios
b) a parameter that I'd really really really like to study
c) NOT CONSTANT in most of the scenarios I care about.



But in the Warlock power budget, EB+AB sits alongside short rest spell slots.

A rogue's damage budget is their at-will damage. The designers would be screwing up if Warlock at-will damage matched rogue at-will damage.


Funny thing...warlocks (at least in the PHB) don't exactly have good blasting spells. And that's a well-known thing--most of their damage does come from EB+AB. And if you stack on 100% hex uptime...you get somewhere around a TWF rogue without advantage.



The monster CR calculating table in the DMG is not "what we expect monsters to have", it is a simplified version of the calculations they have in a spreadsheet they used to calculate CR. (I've seen a quote by a D&D developer about that)

Using it, or even using AC of monsters from the MM by CR, isn't going to model anything particularly great; a good model is going to be insanely complex. I mean, you should really weigh monster AC by the threat and HP of the monster and the threat of the entire encounter. (Imagine if most encounters are zero threat but have AC Y; that AC fundamentally doesn't matter, as you don't care if you hit or miss.)

So stepping back from the table, you'll note it has a relatively flat accuracy rate. Instead of using a table whose quirks are not known, but which is known not to reflect actual monsters encountered, simply use a simpler assumption.

This "simpler" is not "the math is easier" or "the justification is simpler", but the explanation of what is going on is simpler.

You are going to make mistakes. Having complex assumptions makes it harder for people to verify your work, which makes your work less reliable and hence less useful. As an example, your shortbow+advantage being 2.05x a shortbow without advantage: a mistake. Without simply stated assumptions, I had to invent a bunch of assumptions in order to figure out what you did wrong, and I still can't be sure.

Meanwhile, with simply stated assumptions, it is easy to verify or error-check your work.

But you do you.

Actually...looking at real monsters (3 whole books' worth, including the MM, which is based on the stuff from the basic rules/PHB)...15 of the CRs between 1 and 20 have average ACs exactly matching what the table states. 3 have ACs one higher (3, 7, 18). One has AC 1 lower (13). That's pretty darn close for my purposes. And the standard deviations aren't huge either (except at the few levels that just don't have many monsters at all).

And you have to consider CR (or actual monster numbers) in this if you want to do anything with
a) save-based spells. Because assuming 65% there is just flat wrong for published monsters. Assuming any specific, flat number is flat wrong. Especially when looking at WIS vs DEX saves. Or heaven forfend INT vs CON saves.
b) monster health (rounds to kill).
c) comparing anything that provides both advantage or disadvantage and a penalty/bonus to hit. Because then you're scaling the numbers in all sorts of weird non-linear ways.

Witty Username
2022-08-08, 01:55 AM
I still think it is odd to assume sneak attack but not advantage, as the two are pretty closely linked in how often they will come up for a rogue. Admittedly this may be a bias, my play groups rather like ranged characters so multiple PCs in melee is something of an oddity.

Hm, given that point from JC, is it a system assumption that a monk should be able to maintain effectiveness without using their martial arts attack? It's a thought I have been pondering with this RED stuff, because a monk isn't going to be using the martial arts attack all the time (Step of the Wind and Patient Defense will at least some of the time take priority).

PhoenixPhyre
2022-08-08, 09:14 AM
I still think it is odd to assume sneak attack but not advantage, as the two are pretty closely linked in how often they will come up for a rogue. Admittedly this may be a bias, my play groups rather like ranged characters so multiple PCs in melee is something of an oddity.

Hm, given that point from JC, is it a system assumption that a monk should be able to maintain effectiveness without using their martial arts attack? It's a thought I have been pondering with this RED stuff, because a monk isn't going to be using the martial arts attack all the time (Step of the Wind and Patient Defense will at least some of the time take priority).

As for sneak attack, I'm fairly sure that advantage is the secondary way of generating it and that assuming at least one melee ally is fairly safe. Given that the pregen characters from WotC have the fighters, paladins, clerics, and barbarians all as melee. The ranged superiority thing is somewhat of a forum-generated meme (in the original sense).

So far, my numbers suggest that a monk who flurries is far enough above 1 RED (especially when you consider advantage from stun or damage from other subclass features) that those times they don't MA are covered. Just like most rogues will have advantage some of the time, so the times they don't get sneak attack are covered and it all averages out.

animorte
2022-08-08, 09:23 AM
In its absolute most basic form, have I been looking at this correctly?

RED = Sneak Attack + Dex mod + weapon damage

PhoenixPhyre
2022-08-08, 09:43 AM
In its absolute most basic form, have I been looking at this correctly?

RED = Sneak Attack + Dex mod + weapon damage

Yes, in the "accuracy doesn't matter and neither do crits" regime. Which was my initial take on it, due to the sheer simplicity.


As for sneak attack, I'm fairly sure that advantage is the secondary way of generating it and that assuming at least one melee ally is fairly safe. Given that the pregen characters from WotC have the fighters, paladins, clerics, and barbarians all as melee. The ranged superiority thing is somewhat of a forum-generated meme (in the original sense).

So far, my numbers suggest that a monk who flurries is far enough above 1 RED (especially when you consider advantage from stun or damage from other subclass features) that those times they don't MA are covered. Just like most rogues will have advantage some of the time, so the times they don't get sneak attack are covered and it all averages out.

And, thinking about it more, one of the questions I'd like to answer is along the lines of

"how often can a monk get away with not flurrying in order to use his other abilities?" or "how much does advantage offset those other times when the rogue doesn't get sneak attack?"

More generally, RED is a comparison unit, not a benchmark. The actual benchmark seems (broadly) to be party damage output. Specifically, if your party is doing ~4 RED (with substantial error bars) most of the time, they'll kill a "level appropriate" monster in roughly 3 turns. But that allows the GS-wielding champion fighter and TWF rogue (who sometimes gets advantage), both at ~1.5 RED, to compensate for the control wizard and the buff/heal-bot cleric, who are doing cantrip damage most of the time at ~0.5 RED. Or the times when the wizard can AoE the horde of goblins make up for the times the rogue had disadvantage and didn't get sneak attack at all, or the fighter couldn't get into melee with the flying things. So it's not a point comparison, it's an average over a bunch of possible encounters. That's one reason I shy away from doing build comparisons--it's a team game, and damage isn't the only thing. Someone doing 0.5 RED (over the course of an encounter) isn't bad as long as they're doing something else of value to the party. But it does mean the rest have to do more to compensate. Not a ton more, but some more.

x3n0n
2022-08-08, 10:06 AM
First, thanks for doing this!

Second, I agree with others that a Rogue that prioritizes being "effective" in combat "should" have advantage most of the time starting at level 2 or 3 (with some combination of Cunning Action (Hide), Steady Aim, and party support), but I agree that RED as-is is just fine to use as an easy-to-calculate scaling factor, especially since advantage-Rogue is basically 1.39*RED_{CR=level} (according to the version in my browser). That is:

Would-be users, if you would prefer to have graphical output that lets you ignore the non-advantage Rogue and compare the (now-fixed) advantage-Rogue to whatever presets...just do that! If you want numerical output, you may need to do your own scaling, but that's pretty trivial.

UI/feature thoughts:

IMO, CR=level (as you said, basically 0.65 accuracy/"hit on an 8" assuming "normal" progression and no power attack) seems like the most-preferred default accuracy mode.
An actual "0.65 pre-power-attack" accuracy mode might be nice just for legibility. As you said, that's basically CR=level with the exception at character level 9.
For complicated presets, maybe you can hide a detailed description inside a tooltip/hover or behind a toggle? That would let you maintain and possibly add additional progression info while reducing always-on visual clutter.
Fun Champion Fighter presets: "Archery (Longbow)", "VHuman (Crossbow Expert), Archery (Hand Crossbow)", and "VHuman (CBE), Arch (HC), SS@4 (always Risky Shot)" seem like interesting comparisons with the corresponding GWF, PAM, and GWM presets.

PhoenixPhyre
2022-08-08, 10:14 AM
First, thanks for doing this!

Second, I agree with others that a Rogue that prioritizes being "effective" in combat "should" have advantage most of the time starting at level 2 or 3 (with some combination of Cunning Action (Hide), Steady Aim, and party support), but I agree that RED as-is is just fine to use as an easy-to-calculate scaling factor, especially since advantage-Rogue is basically 1.39*RED_{CR=level} (according to the version in my browser). That is:

Would-be users, if you would prefer to have graphical output that lets you ignore the non-advantage Rogue and compare the (now-fixed) advantage-Rogue to whatever presets...just do that! If you want numerical output, you may need to do your own scaling, but that's pretty trivial.

UI/feature thoughts:

IMO, CR=level (as you said, basically 0.65 accuracy/"hit on an 8" assuming "normal" progression and no power attack) seems like the most-preferred default accuracy mode.
An actual "0.65 pre-power-attack" accuracy mode might be nice just for legibility. As you said, that's basically CR=level with the exception at character level 9.
For complicated presets, maybe you can hide a detailed description inside a tooltip/hover or behind a toggle? That would let you maintain and possibly add additional progression info while reducing always-on visual clutter.
Fun Champion Fighter presets: "Archery (Longbow)", "VHuman (Crossbow Expert), Archery (Hand Crossbow)", and "VHuman (CBE), Arch (HC), SS@4 (always Risky Shot)" seem like interesting comparisons with the corresponding GWF, PAM, and GWM presets.


As for advantage, I think I want to be able to compose things. So I could do stuff like "50% advantage, 25% regular, 25% no sneak attack but flat accuracy" by doing a weighted average of three presets instead of having to special-case everything. I also think that Steady Aim is an abomination that should never have been implemented and is pure power creep, because it devalues a lot of advantage-granting class features and spells. So there's that. And basing the core scaling unit off of something that came as an optional feature 5+ years later...it's fairly clear that the MM wasn't developed with "rogue always gets advantage" in mind. The numbers just don't add up that way.

1. Yeah, I think I'll switch that to the default.
2. I'm not sure how much that adds, since, as you say, it only affects level 9, but maybe.
3. Yeah. I need to find a better way than just shoving the details into the title. Maybe a sidebar with all the details for the selected ones? I am also going to do a "deselect all" button...
4. Yeah. Those seem like reasonable presets. More fighter presets is contingent on #3, because that list is already huge. Oh I wish I were a graphic designer....Maybe use expandable elements per class?

x3n0n
2022-08-08, 10:32 AM
As for advantage, I think I want to be able to compose things. So I could do stuff like "50% advantage, 25% regular, 25% no sneak attack but flat accuracy" by doing a weighted average of three presets instead of having to special-case everything.

For Rogue in particular, all of my value is in having the proportionless presets, and I expect to need to combine them myself.
That said, I would like to have a "without Sneak Attack" preset (and maybe "TWF without Sneak Attack"). Those are also reasonable proxies for "vanilla 5E character with d6 weapon proficiency but without any additional combat feature".

Thunderous Mojo
2022-08-08, 10:34 AM
As for sneak attack, I'm fairly sure that advantage is the secondary way of generating it

I would not agree with this. My experience has been that Rogues generate quite a bit of Advantage from being Unseen (aka Hiding).


and that assuming at least one melee ally is fairly safe. Given that the pregen characters from WotC have the fighters, paladins, clerics, and barbarians all as melee.

That is an incorrect assumption. The Champion Fighter from one of the earlier Starter sets begins play with a Higher Dexterity Score than Strength Score and has the Archery Fighting Style.

I know the DM was using elements from both the Starter Set and Essentials Kit, but the Manticore encounter and the Goblinoid Bandit Ambush and Lair encounter were the first two encounters that the party elected to have, and both were very encouraging for Ranged Combatants.

It is a presumption to assume that WotC has a play prescription in mind that Fighters be melee over ranged. Again, the pre-generated Fighter I have seen appears to have been built to be something of a Generalist, capable of using both Ranged and Melee attacks, while being more accurate with Ranged attacks.



The ranged superiority thing is somewhat of a forum-generated meme (in the original sense).

So it would seem to me that you might be too quick to dismiss the play experience of actual people (for, after all, Forum posts are written by persons, and not some abstract conception of meme-hood) in favor of your own presumptions of how play should occur.

Numerous people have shared their experiences that Ranged combat often allows the Adventuring Party to kill or significantly wound something, while limiting the amount of damage sustained by the party.

History and current events are replete with examples of how militarily advantageous it is to be able to attack your foe at a distance, without being attacked in return.

When I was a child, during 1e AD&D, every PC would try to carry Oil Flasks and a 10’ pole because getting close to things often resulted in your death.

A 1e Fighter’s 3 starting weapon proficiencies would often be divided between:

Ranged Weapon
Melee Weapon that had a damage die increase vs Large+ creatures
Weapon that could be used in close quarters


The value of Ranged Combat, in a purely D&D sense, has been a known quantity for almost 50 years now. This is not just some “forum-generated meme”.

Your figurative view strikes me as out of focus.

PhoenixPhyre
2022-08-08, 10:46 AM
For Rogue in particular, all of my value is in having the proportionless presets, and I expect to need to combine them myself.
That said, I would like to have a "without Sneak Attack" preset (and maybe "TWF without Sneak Attack"). Those are also reasonable proxies for "vanilla 5E character with d6 weapon proficiency but without any additional combat feature".

Yeah, I think I'll add a "no sneak attack" and a "twf without sneak attack" setting. Whether I do the compound ones publicly or only in my own hypothesis testing branches is undecided.


I would not agree with this. My experience has been that Rogues generate quite a bit of Advantage from being Unseen (aka Hiding).

That is an incorrect assumption. The Champion Fighter from one of the earlier Starter sets begins play with a Higher Dexterity Score than Strength Score and has the Archery Fighting Style.

I know the DM was using elements from both the Starter Set and Essentials Kit, but the Manticore encounter and the Goblinoid Bandit Ambush and Lair encounter were the first two encounters that the party elected to have, and both were very encouraging for Ranged Combatants.

It is a presumption to assume that WotC has a play prescription in mind that Fighters be melee over ranged. Again, the pre-generated Fighter I have seen appears to have been built to be something of a Generalist, capable of using both Ranged and Melee attacks, while being more accurate with Ranged attacks.

So it would seem to me that you might be too quick to dismiss the play experience of actual people (for, after all, Forum posts are written by persons, and not some abstract conception of meme-hood) in favor of your own presumptions of how play should occur.

Numerous people have shared their experiences that Ranged combat often allows the Adventuring Party to kill or significantly wound something, while limiting the amount of damage sustained by the party.

History and current events are replete with examples of how militarily advantageous it is to be able to attack your foe at a distance, without being attacked in return.

When I was a child, during 1e AD&D, every PC would try to carry Oil Flasks and a 10’ pole because getting close to things often resulted in your death.

A 1e Fighter’s 3 starting weapon proficiencies would often be divided between:

Ranged Weapon
Melee Weapon that had a damage die increase vs Large+ creatures
Weapon that could be used in close quarters


The value of Ranged Combat, in a purely D&D sense, has been a known quantity for almost 50 years now. This is not just some “forum-generated meme”.

Your figurative view strikes me as out of focus.

Some advantage, sure. That's reasonable. But always? And never sneak attack without advantage? I doubt that. And that's the presumption I was speaking against. If rogues were (hypothetically) designed to always have advantage, they'd have that built into the class in ways that weren't failable. Hiding only works if there's cover AND you roll well enough AND you are at range (because hiding in melee doesn't work at all by default). That's a lot of conditionals for something that's being presumed to always happen (or at least to happen across the board in a super-majority of cases). And the presumption that a rogue requires sneak attack to perform at the baseline means that, definitionally, a rogue can't do any better than that. They can only do worse. Which seems...really, really off.

Yes, ranged combat has always been known to be strong. But the idea that you can meaningfully run an all-ranged, no-melee party is somewhat of a forum-generated meme. It started a while ago, to be sure, but it's not the default. Because presuming that that is an actual design assumption means that they deliberately made two classes that were (in this hypothetical) useless. Paladins and barbarians have no default path to being good at ranged combat. Yet we all know that paladins (at least) are very effective. And clerics have only (by default) short-ranged stuff (30' or less for most of it). They don't have (by default) good ranged blasting spells; their cantrips are short-ranged, and they have heavier armor (suggesting melee). That starter set is explainable by having a cleric as a front-liner. And individual combats are, rather on purpose, designed to show off various strengths and weaknesses. Cherry-picking a few is a rather bad conversational tactic.

x3n0n
2022-08-08, 11:09 AM
Yeah, I think I'll add a "no sneak attack" and a "twf without sneak attack" setting.

Cool!

Another fun (?) Rogue preset: advantage with Elven Accuracy.

Dork_Forge
2022-08-08, 11:17 AM
Some advantage, sure. That's reasonable. But always? And never sneak attack without advantage? I doubt that. And that's the presumption I was speaking against. If rogues were (hypothetically) designed to always have advantage, they'd have that built into the class in ways that weren't failable. Hiding only works if there's cover AND you roll well enough AND you are at range (because hiding in melee doesn't work at all by default). That's a lot of conditionals for something that's being presumed to always happen (or at least to happen across the board in a super-majority of cases). And the presumption that a rogue requires sneak attack to perform at the baseline means that, definitionally, a rogue can't do any better than that. They can only do worse. Which seems...really, really off.


It also assumes no factors that impose disadvantage, which will probably come up at least intermittently.

PhoenixPhyre
2022-08-08, 12:01 PM
It also assumes no factors that impose disadvantage, which will probably come up at least intermittently.

Yeah.

------

In other news, I just pushed a new version that includes
* Two new rogue presets (shortbow without sneak attack and TWF without sneak attack)
* The CR = level (65%) preset as the new default
* A couple of (very basic) wizard and sorcerer presets--basically different cantrip assumptions (quickened firebolt + firebolt for sorcerer vs no quicken, evocation vs non-evocation firebolt, a few different Bladesinger options)
* Groundwork for more classes and more modularity, to simplify my life
* A fix for mercy monk. Not sure if my previous published version was borked, but at least one of the working versions was--horribly, in weird ways. (Still don't trust the preset for astral monk, but haven't found what's causing that mistrust.)
* Cache busting, so that hopefully new changes won't require as much manual fiddling to show up

x3n0n
2022-08-08, 12:52 PM
The actual benchmark seems (broadly) to be party damage output. Specifically, if your party is doing ~4 RED (with substantial error bars) most of the time, they'll kill a "level appropriate" monster in roughly 3 turns.

And in those 3 rounds, the monster deals roughly 3 times its DPR to the party. IIRC, you have mentioned your dislike for monsters with badly unbalanced "defensive CR" and "offensive CR". It feels like this helps explain why: if monster DPR is high but "expected rounds of monster damage" is lower (1 or 2), "monster total damage dealt" has a much, much higher variance. "Monster rolled high initiative" adds a full "mDPR", and "players rolled poorly on their early attacks" adds another "mDPR"; conversely, "monster rolled bad initiative" plus "players rolled well" might take "rounds of monster damage" all the way down to 0.
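A toy Monte Carlo to illustrate the swinginess (a sketch with invented numbers, not output from the tool):

import random, statistics

def damage_dealt(m_dpr, expected_rounds, trials=100_000):
    """Monster's total damage. The swing in *rounds* is the same either
    way, but the fragile/explosive monster's swing in *damage* is much
    larger, both absolutely and relative to its mean."""
    totals = []
    for _ in range(trials):
        rounds = expected_rounds
        if random.random() < 0.5:             # monster won initiative
            rounds += 1
        rounds += random.choice([-1, 0, 1])   # party rolled hot/average/cold
        totals.append(max(0, rounds) * m_dpr)
    return statistics.mean(totals), statistics.stdev(totals)

print(damage_dealt(10, 3))  # durable, moderate DPR: mean ~35, sd ~10
print(damage_dealt(30, 1))  # fragile, explosive:    mean ~45, sd ~29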

That framing also helps (in VERY broad strokes) explain what a given PC's contribution is for a given combat encounter.

Many PCs spend their combat actions on dealing damage--the sooner a monster is KOed, the fewer rounds it deals its DPR.
Other PC actions (like web or grappling/shoving) directly inhibit the monster's damage-dealing capability and/or increase the damage output of the party's damage dealers.
Life Clerics (and Twilight Clerics and summoners) can resource-efficiently turn "damage to the party's PCs" into something that doesn't matter.

PhoenixPhyre
2022-08-08, 01:33 PM
And in those 3 rounds, the monster deals roughly 3 times its DPR to the party. IIRC, you have mentioned your dislike for monsters with badly unbalanced "defensive CR" and "offensive CR". It feels like this helps explain why: if monster DPR is high but "expected rounds of monster damage" is lower (1 or 2), "monster total damage dealt" has a much, much higher variance. "Monster rolled high initiative" adds a full "mDPR", and "players rolled poorly on their early attacks" adds another "mDPR"; conversely, "monster rolled bad initiative" plus "players rolled well" might take "rounds of monster damage" all the way down to 0.

That framing also helps (in VERY broad strokes) explain what a given PC's contribution is for a given combat encounter.

Many PCs spend their combat actions on dealing damage--the sooner a monster is KOed, the fewer rounds it deals its DPR.
Other PC actions (like web or grappling/shoving) directly inhibit the monster's damage-dealing capability and/or increase the damage output of the party's damage dealers.
Life Clerics (and Twilight Clerics and summoners) can resource-efficiently turn "damage to the party's PCs" into something that doesn't matter.

Yeah. This is a good additional framing IMO.

The bold is something that has happened to me quite a lot recently (or nearly so)--my current party has an Oath of the Watchers Dexadin (the aura adds +Proficiency to initiative) and 3 other dex-focused characters, at least one of whom has Alert. They basically always go first. Low initiative results are around 20 most of the time. Which means they have an entire turn to beat down the enemies before the enemies go. So if their damage output is high, several of the monsters might not even go at all. Or might only go once (because they get killed half-way through round 2, before they've gone again). Which encourages really, really high monster damage output, which makes things swingy and devalues a lot of lower-CR monsters.
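(A quick sanity check on "basically always go first", with guessed bonuses rather than the actual sheets--Watchers aura plus decent Dex across the board, one Alert character, and a monster at +2:

import random

def p_monster_first(monster_bonus, party_bonuses, trials=100_000):
    """Chance the monster out-rolls every single PC."""
    wins = 0
    for _ in range(trials):
        m = random.randint(1, 20) + monster_bonus
        if all(m > random.randint(1, 20) + b for b in party_bonuses):
            wins += 1
    return wins / trials

print(p_monster_first(2, [9, 9, 9, 14]))  # about 1% -- the monster ~never leads
)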

Thunderous Mojo
2022-08-09, 08:14 AM
my current party has an Oath of the Watchers Dexadin (the aura adds +Proficiency to initiative) and 3 other dex-focused characters, at least one of whom has Alert. They basically always go first. Low initiative results are around 20 most of the time. Which means they have an entire turn to beat down the enemies before the enemies go. So if their damage output is high, several of the monsters might not even go at all. Or might only go once (because they get killed half-way through round 2, before they've gone again). Which encourages really, really high monster damage output, which makes things swingy and devalues a lot of lower-CR monsters.

Out of curiosity, was party-wide Initiative spiking something that prior groups never attempted in your games before?

The impact of Initiative spiking is definitely a topic that has been discussed, in many threads in the Playground.

Did those posts not feel relevant to you, or is it the case that, now that you are experiencing it on a party-wide scale, the impact is inescapably noticeable?

KorvinStarmast
2022-08-09, 08:30 AM
It's a team game, and damage isn't the only thing. And this is important as regards the rogue getting sneak attack without advantage: it happens a lot in the games I have played, since it allows the rogue to 'focus fire' on an already-engaged enemy, which helps the melee ally move or find another target on their turn.

A few posts up someone mentioned flurry of blows. With this being a team game, I find flurry of blows to be a false friend in the great DPR obsession, except in rare cases (particularly against an enemy concentrating on a spell or similar ability).

I'd rather stun the enemy than FoB them, since if I stun them the whole party gets the benefit of one enemy:

Not attacking them
Not reacting with an opportunity attack
Not moving
Not using any other action or reaction or bonus action.
Being attacked with advantage.



Yes, there's a risk it won't work. And that's why it's probably not a good thing to toss into your RED analysis.

If I am just using martial arts (stun comes online at level 5), I can attack three times and try to stun on any hit.
Yes, I need to pick and choose when to spend that ki point and when not to, but a stun that allows my rogue to attack with advantage, our paladin to attack with advantage, the barbarian to attack with advantage without going reckless, and our wizard to zip a cantrip with advantage can whittle down an enemy fast in a nova.

Or, if it's a big bad, it gives the entire party a round to GTFO if we are otherwise out of resources and low on HP. (That has happened twice in games I have played where the combat was swinging badly for the party and a stun cropped up at an opportune time).

While you don't want to fold the stun into RED, I'd not fold flurry of blows in either. One more punch or kick in tiers 1 and 2 isn't as useful as a lot of the other ways to spend a ki point.
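(For the probability-minded, a back-of-the-envelope sketch--the 65% hit chance and 50% failed-save chance are made-up round numbers, not from any actual monster:

p_hit, p_fail = 0.65, 0.50
one_ki = (1 - (1 - p_hit) ** 3) * p_fail     # stun attempt on the first hit only
every_hit = 1 - (1 - p_hit * p_fail) ** 3    # willing to burn ki on every hit
print(round(one_ki, 2), round(every_hit, 2)) # ~0.48 and ~0.69
)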


Some advantage, sure. That's reasonable. But always? And never sneak attack without advantage? I doubt that. And that's the presumption I was speaking against. In 8 years of play, SA has procced without advantage more than with it, but it's probably not much higher than 50-50, since as we've played the groups have often tried to figure out ways for rogues to get advantage. And for our groups, the only reason it got close to 50-50 is how hard I worked (as a support caster) with my warlock to give our rogue/ranger advantage whenever I could (use a web wand, hold person, etc.).

Aside: the hide thing TM mentioned is, I have found, a little DM-dependent, and nobody has used that Tasha's Steady Aim in any of the games I have played.

Dork_Forge
2022-08-09, 09:01 AM
A few posts up someone mentioned flurry of blows. With this being a team game, I find flurry of blows to be a false friend in the great DPR obsession, except in rare cases (particularly against an enemy concentrating on a spell or similar ability).

I'd rather stun the enemy than FoB them, since if I stun them the whole party gets the benefit of one enemy:

Not attacking them
Not reacting with an opportunity attack
Not moving
Not using any other action or reaction or bonus action.
Being attacked with advantage.



Yes, there's a risk it won't work. And that's why it's probably not a good thing to toss into your RED analysis.

If I am just using martial arts (stun comes online at level 5), I can attack three times and try to stun on any hit.
Yes, I need to pick and choose when to spend that ki point and when not to, but a stun that allows my rogue to attack with advantage, our paladin to attack with advantage, the barbarian to attack with advantage without going reckless, and our wizard to zip a cantrip with advantage can whittle down an enemy fast in a nova.

Or, if it's a big bad, it gives the entire party a round to GTFO if we are otherwise out of resources and low on HP. (That has happened twice in games I have played where the combat was swinging badly for the party and a stun cropped up at an opportune time).

While you don't want to fold the stun into RED, I'd not fold flurry of blows in either. One more punch or kick in tiers 1 and 2 isn't as useful as a lot of the other ways to spend a ki point.

It's worth considering that, for a lot of Monks, FoB is not simply an additional attack; many subclasses attach rider effects or benefits to it.

I'd also argue Tiers 1 and 2 are where that additional attack is most useful for damage, as you're more likely to be fighting enemies that you can finish off or meaningfully injure with it.

KorvinStarmast
2022-08-09, 09:06 AM
Out of curiosity, was party-wide Initiative spiking something that prior groups never attempted in your games before? I've never seen anyone try that yet; we rather stumbled onto it in this group with Phoenix. (Helps that I have a weapon of warning.) In the three groups that I play or DM with that include a barbarian at levels 9 and 10, that advantage on initiative (level 7, Feral Instinct) is very handy.

The impact of Initiative spiking is definitely a topic that has been discussed, in many threads in the Playground. Unless you are playing with optimizers, the topic doesn't seem to come up very often.

Did those posts not feel relevant to you, or is it the case that, now that you are experiencing it on a party-wide scale, the impact is inescapably noticeable? It really depends on the kind of group you are playing in. (Aside: we had a wizard take the Alert feat. He likes going first. He likes fireball too.)
Nobody else in that play group (16 different PCs in two parallel campaigns) has taken Alert. They have other stuff that interests them. Three have taken the Lucky feat, though.


I'd also argue Tiers 1 and 2 are where that additional attack is most useful for damage, as you're more likely to be fighting enemies that you can finish off or meaningfully injure with it. I won't disagree that in Tier 1, the extra attack here and there is a benefit at 1d4+mod damage. When stunning strike comes online, I've seen a mixed bag of choices by my players. (I lean hard into stun or other uses of a ki point.)
One is big on flurry, and she stayed that way until she successfully completed her first stun followed by attacks at advantage ... it was an eye-opener for her. She now splits her ki between the two as often as not. She's Tabaxi, so she has a movement bonus that she uses when she needs it. She's level 9, soon to be 10.

The other one is a wood elf drunken master. He didn't try his first stun until level 6, IIRC, and I asked him "do you want to try and stun this enemy?" ... He likes stun, and he seems to spend his ki on that more often than FoB, but the occasional FoB still happens. Just turned level 10. Having lots more ki points really helps in this regard.

x3n0n
2022-08-09, 09:25 AM
While you don't want to fold the stun into RED, I'd not fold flurry of blows in either. One more punch or kick in tiers 1 and 2 isn't as useful as a lot of the other ways to spend a ki point.

The presets in the tool already include the damage from (staff + martial arts, spending no ki on additional damage) and (staff + spend all of your ki on flurry, depending on adventuring day structure).
Was there something else you were looking for? (That is, what would it mean for the tool to "fold the stun into RED" in a way that you would find useful?)

@PhoenixPhyre: however, that does suggest another possible refactoring. Instead of building advantage into presets, have it be an option on accuracy.
Then you could remove all of the presets that build in advantage, and the user could simulate/estimate the offensive impact of something like stun/restrain/prone/Crusher by setting that flag and choosing the presets that correspond to their party's makeup.

KorvinStarmast
2022-08-09, 09:48 AM
The presets in the tool already include the damage from (staff + martial arts, spending no ki on additional damage) and (staff + spend all of your ki on flurry, depending on adventuring day structure). Oh, if the work's already done, then no worries.

PhoenixPhyre
2022-08-09, 09:53 AM
The presets in the tool already include the damage from (staff + martial arts, spending no ki on additional damage) and (staff + spend all of your ki on flurry, depending on adventuring day structure).
Was there something else you were looking for? (That is, what would it mean for the tool to "fold the stun into RED" in a way that you would find useful?)

@PhoenixPhyre: however, that does suggest another possible refactoring. Instead of building advantage into presets, have it be an option on accuracy.
Then you could remove all of the presets that build in advantage, and the user could simulate/estimate the offensive impact of something like stun/restrain/prone/Crusher by setting that flag and choosing the presets that correspond to their party's makeup.

That's a direction I'm going in--a bunch of behind-the-scenes refactoring needs to happen to make that possible, though. Advantage had been handled on a case-by-case basis.

I think I'm going to make it a separate switch from accuracy itself though, because I don't want to bloat that dropdown much more with all the possible combinations.
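Shape-wise, probably something like this (a Python sketch of the idea, not the actual implementation):

def hit_probability(base_p, advantage=False, disadvantage=False):
    if advantage and not disadvantage:
        return 1 - (1 - base_p) ** 2
    if disadvantage and not advantage:
        return base_p ** 2
    return base_p  # both, or neither, cancel out

def adjusted_damage(on_hit_avg, crit_extra_avg, base_p, advantage=False):
    # presets supply the damage numbers; accuracy and advantage are knobs
    p = hit_probability(base_p, advantage)
    p_crit = hit_probability(0.05, advantage)
    return p * on_hit_avg + p_crit * crit_extra_avg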


Oh, if the work's already done, then no worries.

One I do need to add is "monk with [quarterstaff|unarmed] doing something else with his bonus action" (i.e. just the Attack action, no Martial Arts bonus attack). Currently they're all doing 3 attacks or more.

Thunderous Mojo
2022-08-09, 10:56 AM
Unless you are playing with optimizers, the topic doesn't seem to come up very often.

This made me chuckle because an asynchronous conversation was had on the Evil Oni Discord server about Initiative spiking…which was precipitated by you talking about Watcher Paladins.

ff7hero used Hex to negatively impact a foe in our game, and conversations were had regarding using Enhance Ability for Initiative purposes as part of our adventuring preparations.

(Bards have a built-in Initiative boost, which serves them quite well in T4.)

As Dork_Forge convinced me during that conversation, Initiative being a Dexterity check in 5e isn’t a secret.

Indeed, it is stated in two locations.

‘Optimizers’ is an empty label. An ‘Optimized’ character is optimized for something. One can have a character optimized for Role Playing.

Perhaps I am misinterpreting, but is your use of the term ‘optimizer’ essentially meaning ‘playstyle I don’t like’?

If optimizer means someone that knows the rules and tries to be effective, then you also fall into that category, Korvin. 🍻😉

(Which explains why I am unsure of what the term is meant to signify.)

x3n0n
2022-08-09, 11:18 AM
I'm hoping we can redirect the mainline conversation in this thread to insights about RED as an analytical concept and/or PhoenixPhyre's damage calculator tool.


‘Optimizers’ is an empty label. [...] (Which explains why I am unsure of what the term is meant to signify.)


Agreed that "optimizer" is typically a statement about the speaker's values, not a specification.

Agreed also that emphasizing initiative is often very effective, especially when combined with the "nova" or hard-control potential of either allies or enemies.
Its relative importance decreases the longer a combat encounter remains "competitive".

The insight I was trying to get to in the earlier post is that encounters dominated by super-durable (and resistant) enemies with moderate damage output are less affected by initiative, and the reverse for encounters with fragile-but-explosive enemies.

Thunderous Mojo
2022-08-09, 11:22 AM
I'm hoping we can redirect the mainline conversation in this thread to insights about RED as an analytical concept and/or PhoenixPhyre's damage calculator tool.

My apologies, by no means do I wish to distract.

Dork_Forge
2022-08-09, 11:27 AM
This made me chuckle because an asynchronous conversation was had on the Evil Oni Discord server about Initiative spiking…which was precipitated by you talking about Watcher Paladins.

ff7hero used Hex to negatively impact a foe in our game, and conversations were had regarding using Enhance Ability for Initiative purposes as part of our adventuring preparations.

(Bards have a built-in Initiative boost, which serves them quite well in T4.)

As Dork_Forge convinced me during that conversation, Initiative being a Dexterity check in 5e isn’t a secret.

Indeed, it is stated in two locations.

‘Optimizers’ is an empty label. An ‘Optimized’ character is optimized for something. One can have a character optimized for Role Playing.

Perhaps I am misinterpreting, but is your use of the term ‘optimizer’ essentially meaning ‘playstyle I don’t like’?

If optimizer means someone that knows the rules and tries to be effective, then you also fall into that category, Korvin. 🍻😉

(Which explains why I am unsure of what the term is meant to signify.)

On the topic of initiative boosting (I personally don't find spiking appropriate for anything outside extreme cases): it's extremely common in the game, and you can have a party that does it entirely by accident:

Bard: Jack of All Trades, Bardic Inspiration (others)

Champion Fighter: Remarkable Athlete

Swashbuckler Rogue: Rakish Audacity

Gloomstalker Ranger: Dread Ambusher

War Wizard and Chronurgist: Tactical Wit and Temporal Awareness

Twilight Cleric: Vigilant Blessing

Watchers Paladin: Aura of the Sentinel

And then there's Alert, various spells etc.

So it's not necessarily purely the realm of higher-end optimisation.

KorvinStarmast
2022-08-09, 01:06 PM
Perhaps I am misinterpreting, but is your use of the term ‘optimizer’ essentially meaning ‘playstyle I don’t like’? It means, in that sentence, a small subset of players. It is certainly a category of play and chargen that I delve into. In the initial Oni campaign I tried to make that Arcana Cleric work and get the most out of him. I was enjoying that.

If optimizer means someone that knows the rules and tries to be effective, then you also fall into that category, Korvin. 🍻😉 For sure.
My bard had a boost to initiative through level 20 from Jack of All Trades.
I never used the 14th-level ability to boost my own initiative, although a couple of times I should have, given how two of the other players were really bad about just charging in without thinking, or ignoring any planning we may have done.
I mostly used BI for Cutting Words or a standard BI for our two melee fighters.

I used Enhance Ability on a rogue/assassin (some years back) to help him get an initiative boost when we were trying to ambush an enemy in the Underdark. That is the one time I can remember doing that.
I suppose that if a Sorcerer twinned Enhance Ability on two allies they'd both get that boost to the roll. But that presumes a Sorcerer had picked that spell.

But we are drifting from the thread topic, sorry about that.