  1. - Top - End - #1
    Ogre in the Playground
    Join Date
    Aug 2012
    Location
    Vacation in Nyalotha

    Default Monte Carlo vs full distribution for survivability benchmarking

    Assumptions:
    The intent is to analyze a defender’s durability when faced with a known opponent. The derived results will be used to frame suggested defense benchmarks for players to consider in building their characters.
    Durability, as we're concerned with it, is how many swings it takes the attacker to render the defender unconscious or dying.
    The system uses granular, dice-roll-generated results, such that it is possible to know the full distribution of all states up to any given swing count.
    The trials involve no input once begun.

    What merit is there to performing Monte Carlo simulations on the above described system rather than just working off the explicit probability distribution?

    How much does this change if we provide the defender one reroll they can spend whenever? Or does sufficient computational power also allow an exhaustive analysis in this case?

    The dice system in question is 4e shadowrun with the one simplification of armor values being a flat average reduction.

    Spoiler: Dice explanation
    Show

    Xd6 >= 5: roll X d6 and count all 5s and 6s as successes. Both defender and attacker have a quantity they roll. If the attacker has more successes they hit, dealing POWER - ARMOR + net successes in damage.
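As a concrete reading of that mechanic, a single opposed swing can be sketched in Python (a minimal sketch; the function names and the flat POWER/ARMOR inputs are illustrative, not from any actual implementation):

```python
import random

def roll_hits(pool):
    """Roll `pool` d6 and count successes (5s and 6s)."""
    return sum(1 for _ in range(pool) if random.randint(1, 6) >= 5)

def swing_damage(attack_pool, defense_pool, power, armor):
    """One opposed swing: the attacker hits only with more successes
    than the defender, dealing POWER - ARMOR + net successes damage."""
    net = roll_hits(attack_pool) - roll_hits(defense_pool)
    if net <= 0:
        return 0  # miss or tie: no damage
    return max(0, power - armor + net)
```

A Monte Carlo trial is then just calling `swing_damage` in a loop until accumulated damage exceeds the defender's condition track.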


    Am I missing something critical here?
    If all rules are suggestions what happens when I pass the save?

  2. - Top - End - #2
    Colossus in the Playground
     
    Segev

    Join Date
    Jan 2006
    Location

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    The primary purpose of Monte Carlo simulations is to get a generous covering of a state space that is too expensive to exhaustively search. The secondary purpose is to avoid accidentally missing periodic or otherwise common cases by your choice of regular distribution grid happening to "miss" them. A Monte Carlo simulation's randomness ensures no pattern to your test grid and thus avoids any pattern-based blind spots. (Imagine a test grid that checked for the color patterns of a flat surface. If it is a regular grid of 4 x 4 points in the coincidentally correct configuration, it could read a chess board as being solidly one color. A Monte Carlo sampling of 16 random points on a chess board is significantly less likely to get only one color result.)
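The chessboard illustration is easy to reproduce; here is a hypothetical Python sketch (board size and sample counts chosen to match the example, the `color` helper is made up):

```python
import random

def color(x, y):
    """Color of square (x, y) on a chessboard: 0 or 1."""
    return (x + y) % 2

# A regular 4x4 grid with even spacing lands only on same-parity
# squares, so it sees exactly one color.
grid_colors = {color(2 * i, 2 * j) for i in range(4) for j in range(4)}

# 16 uniformly random squares miss one of the two colors only with
# probability 2 * (1/2)**16, i.e. about 0.003%.
random.seed(42)
random_colors = {color(random.randint(0, 7), random.randint(0, 7))
                 for _ in range(16)}
```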

  3. - Top - End - #3
    Titan in the Playground
     
    Daemon

    Join Date
    May 2016
    Location
    Corvallis, OR
    Gender
    Male

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    One advantage of Monte Carlo simulations is that they give some evidence of how fast the distribution converges to the large-number limit. Since there are parameters other than just the explicit distribution, this convergence (IMX) tends to be quite slow: the large-number limit (replacing dice rolls with their means, for example) only starts making sense after thousands of iterations, which is unrealistic for any real game scenario, because you tend to only go a few dozen rounds against enemies with identical parameters, if that. So the variance (which the MC simulation catches) makes much more of a difference than it would otherwise seem.
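A rough illustration of that convergence, as a hypothetical Python sketch: the exact mean of a 6-die pool (counting 5s and 6s) is 6 × 1/3 = 2, and a few dozen trials can wander well away from it while tens of thousands settle close (pool size and trial counts are arbitrary choices here):

```python
import random
import statistics

def hits(pool):
    """Count 5s and 6s in a pool of d6 (hit chance 1/3 per die)."""
    return sum(1 for _ in range(pool) if random.randint(1, 6) >= 5)

def mc_mean(pool, trials):
    """Monte Carlo estimate of the pool's mean hit count."""
    return statistics.mean(hits(pool) for _ in range(trials))

random.seed(0)
few = mc_mean(6, 30)        # a realistic number of in-game rounds
many = mc_mean(6, 100_000)  # the "thousands of iterations" regime
```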
    Dawn of Hope: a 5e setting. http://wiki.admiralbenbo.org
    Rogue Equivalent Damage calculator, now prettier and more configurable!
    5e Monster Data Sheet--vital statistics for all 693 MM, Volo's, and now MToF monsters: Updated!
    NIH system 5e fork, very much WIP. Base github repo.
    NIH System PDF Up to date main-branch build version.

  4. - Top - End - #4
    Firbolg in the Playground
    Join Date
    Oct 2011

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    DR 5 is similar to DR d10 in the aggregate… unless you're being attacked for d4 damage.

    If you are comparing the full probability distribution of each side to map the probability of possible results, then you don't need random polling of results *except* as a good way to verify that your results actually match what random chance says are possible results.

  5. - Top - End - #5
    Titan in the Playground
    Join Date
    May 2007
    Location
    Tail of the Bellcurve
    Gender
    Male

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    The major advantage is that it takes like 10 minutes to write code that does the Monte Carlo. Since the problem is computationally trivial, evaluating the MC is also going to be very fast.

    I think it's possible to get a closed form for the probability of the defender being alive on round x, but it's liable to take a lot longer to derive.
    Blood-red were his spurs i' the golden noon; wine-red was his velvet coat,
    When they shot him down on the highway,
    Down like a dog on the highway,
    And he lay in his blood on the highway, with the bunch of lace at his throat.


    Alfred Noyes, The Highwayman, 1906.

  6. - Top - End - #6
    Firbolg in the Playground
    Join Date
    Oct 2011

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    Quote Originally Posted by warty goblin View Post
    The major advantage is that it takes like 10 minutes to write code that does the Monte Carlo. Since the problem is computationally trivial, evaluating the MC is also going to be very fast.

    I think it's possible to get a closed form for the probability of the defender being alive on round x, but it's liable to take a lot longer to derive.
    Although it's not as clean as I would like, one could quickly hammer out a probabilistic answer to "alive after X rounds" by building the probability tree, and summing the odds.

    For a simple example, suppose you wanted to know the odds of getting at least 3 heads on coin flips by X flips.

    After 3 flips, you would have

    1 3 3 {1}  (the braced coefficients count outcomes with at least 3 heads)

    And it would return 1/8.

    After 4 flips,

    1 4 6 {4 1} = 5/16

    5 flips

    1 5 10 {10 5 1} = 16/32

    6 flips

    1 6 15 {20 15 6 1} = 42/64

    Etc etc
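Those running tallies are rows of Pascal's triangle, with the braced terms counting the outcomes that have 3+ heads. The same numbers fall out of a direct binomial sum, as a quick Python check (the function name is made up):

```python
from fractions import Fraction
from math import comb

def at_least_3_heads(flips):
    """Exact probability of getting 3 or more heads in `flips` fair tosses."""
    return sum(Fraction(comb(flips, k), 2 ** flips)
               for k in range(3, flips + 1))
```

This reproduces 1/8, 5/16, 16/32, and 42/64 for 3 through 6 flips.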

  7. - Top - End - #7
    Ogre in the Playground
    Join Date
    Aug 2012
    Location
    Vacation in Nyalotha

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    Quote Originally Posted by warty goblin View Post
    The major advantage is that it takes like 10 minutes to write code that does the Monte Carlo. Since the problem is computationally trivial, evaluating the MC is also going to be very fast.

    I think it's possible to get a closed form for the probability of the defender being alive on round x, but it's liable to take a lot longer to derive.
    Well, I’ve already done both (the closed form is also an O(n) operation). The topic of the MC only came up when someone in the relevant balance thread insisted on using a MC for vague reasons. We were having some communication issues and misunderstandings, so I was left wondering if something crucial hadn’t been conveyed.


    Thanks everyone for clearing this up.

  8. - Top - End - #8
    Barbarian in the Playground
     
    RedWizardGuy

    Join Date
    Jul 2020

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    Usually, in any simulation, it is worth it to simulate across an array of conditions. This lets you display how much each small change impacts outcomes.

    So, if there is a set of conditions, simulate (separately) for each. There is a limit to how many you can present (and if you allow conditions to interact, you'll get a 2x2 or 3x3 or nxn table of simulations to present.)

  9. - Top - End - #9
    Ogre in the Playground
    Join Date
    Aug 2012
    Location
    Vacation in Nyalotha

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    Quote Originally Posted by cutlery View Post
    Usually, in any simulation, it is worth it to simulate across an array of conditions. This lets you display how much each small change impacts outcomes.

    So, if there is a set of conditions, simulate (separately) for each. There is a limit to how many you can present (and if you allow conditions to interact, you'll get a 2x2 or 3x3 or nxn table of simulations to present.)
    That’s precisely my intent! GM gets a handy list of appropriate baseline threats (that I fully expect to blunder the way of any CR system), players get guidance amounting to “if you hit XYZ benchmark you shouldn’t faceplant instantly against mooks assuming you have one reroll on hand, for the entire campaign” along with some other relevant guiding statistics.

    My two current regions of interest are the 1st and 50th percentiles for assessing the risks of sudden KOs, and median performance for the bounds on what threat:duration constitutes a fight with no underdog. As the system may not frequently present mosh pits that subject a character to 20 attacks, or even 10 over the course of a single fight, I’m wondering just what sort of parameters should go into a model that accounts for periodic recovery...

    But I’m getting sidetracked. Main questions have been answered wonderfully.

  10. - Top - End - #10
    Ettin in the Playground
     
    BardGuy

    Join Date
    Jan 2009

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    I'd think you could run this over the entire probability distribution without taking too much computing power or time. I did something like this in R for a class project for D&D 5e, doing both Monte Carlo simulations and calculating the average probability (which is just a step away from making the actual probability distribution -- actually, I guess I did make the actual distribution, but just averaged it instead of mapping it, since doing the Monte Carlo was part of the class project requirement.)
    And it worked -- albeit sometimes taking a couple minutes -- even for large dice pools and with advantage/disadvantage (in the 5e D&D sense).

    It would probably double (at least) the computational time to compare it across two dice pools (e.g., the attacker and defender), but it seems reasonable you could map the entire distribution of, say, "I have 5d6" against "enemies who have 1d6 to 10d6". If it takes long, make and store the enemy dice pools (and I guess maybe the PCs' potential ones?) as saved datasets/files and have your program pull the appropriate ones. It would probably be faster to load the pre-generated distribution rather than calculate it on the fly.
    Also, the fact you're using d6s and not higher means fewer numbers in the array/list/whatever, which matters if your final counts increase exponentially due to something.
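The store-and-reuse idea is cheap to sketch in Python rather than R (a hypothetical sketch; the function name is made up, and the p = 1/3 hit chance follows the dice explanation in the opening post):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def hit_distribution(pool, p=1/3):
    """Exact P(k hits) for k = 0..pool dice, as a tuple.
    Cached, so repeated lookups of the same pool cost nothing
    after the first call -- an in-memory stand-in for saved files."""
    return tuple(comb(pool, k) * p**k * (1 - p) ** (pool - k)
                 for k in range(pool + 1))
```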

    Not sure what language you're using, but if you're using R and would like something for comparative purposes, I posted a version of the code here: https://forums.giantitp.com/showthre...olling-Program

  11. - Top - End - #11
    Ogre in the Playground
    Join Date
    Aug 2012
    Location
    Vacation in Nyalotha

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    Quote Originally Posted by JeenLeen View Post
    I'd think you could run this over the entire probability distribution without taking too much computing power or time. I did something like this in R for a class project for D&D 5e, doing both Monte Carlo simulations and calculating the average probability (which is just a step away from making the actual probability distribution -- actually, I guess I did make the actual distribution, but just averaged it instead of mapping it, since doing the Monte Carlo was part of the class project requirement.)
    And it worked -- albeit sometimes taking a couple minutes -- even for large dice pools and with advantage/disadvantage (in the 5e D&D sense).

    It would probably double (at least) the computational time to compare it across two dice pools (e.g., the attacker and defender), but it seems reasonable you could map the entire distribution of, say, "I have 5d6" against "enemies who have 1d6 to 10d6". If it takes long, make and store the enemy dice pools (and I guess maybe the PCs' potential ones?) as saved datasets/files and have your program pull the appropriate ones. It would probably be faster to load the pre-generated distribution rather than calculate it on the fly.
    Also, the fact you're using d6s and not higher means fewer numbers in the array/list/whatever, which matters if your final counts increase exponentially due to something.

    Not sure what language you're using, but if you're using R and would like something for comparative purposes, I posted a version of the code here: https://forums.giantitp.com/showthre...olling-Program
    I already have the whole thing built. Pre-built roll tables were a given, since it boils down to what is essentially jury-rigged multiplication of fixed-size matrices. I haven’t bothered timing it, since a single distribution up to N=50 takes less than a second and generally captures >99% of the distribution.

  12. - Top - End - #12
    Titan in the Playground
    Join Date
    May 2007
    Location
    Tail of the Bellcurve
    Gender
    Male

    Default Re: Monte Carlo vs full distribution for survivability benchmarking

    Quote Originally Posted by Quertus View Post
    Although it's not as clean as I would like, one could quickly hammer out a probabilistic answer to "alive after X rounds" by building the probability tree, and summing the odds.

    For a simple example, suppose you wanted to know the odds of getting at least 3 heads on coin flips by X flips.
    Probability trees are certainly a way, but they're inefficient here, since the rolls are independent. A faster approach is as follows:

    We want to know the probability of survival after N rolls. Let A_i be the ith attack roll, i = 1,..., N, with F_i the ith defense roll. Then we can define D_i as the number of successful attacks on roll i, D_i = max{0, A_i - F_i}. So far so much notation.

    Suppose attack rolls are made with n_A dice, and defense with n_F dice. Then A_i and F_i are independent binomial random variables with success probability p = 1/3 (each die shows a 5 or 6), and it's pretty easy to show that for 0 < z <= n_A

    P(A_i - F_i = z) = sum_{j = z}^{n_A} P(A_i = j)P(F_i = j - z)

    where P(F_i = x) = 0 for x > n_F. We could derive a similar expression for z < 0, but there's no point here: since we only care about D_i, and we already have the probability of all nonzero values of D_i,

    P(D_i = 0) = 1 - sum_{z = 1}^{n_A} P(A_i - F_i = z)


    So now we know the probability of doing d damage, for d = 0, ..., n_A. Now note that P(alive after N rolls) = P(sum_i D_i < h) for some fixed amount of health h (this is the survival function, not the hazard P(dies at i | alive at i - 1)).

    But the D_i are independent across rolls i, so sum_i D_i is just a multinomial in N trials with outcome probabilities given above. So to simulate from the survival probability for roll N, we just need to simulate a multinomial and sum up the components (playing a little fast and loose with the definition here, since technically a multinomial is just how many of each result there are, not the value of that result, but you get the point and it isn't worth defining the notation to deal with it in absolute formality).
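Because the D_i are independent and identically distributed, the exact survival probability can also be computed with no simulation at all, by convolving the per-swing damage distribution with itself N times. A hypothetical Python sketch of those formulas (function names are made up; damage per swing is taken to be D_i itself, matching the derivation above rather than the full POWER/ARMOR rule):

```python
from math import comb

P = 1 / 3  # chance each d6 shows a 5 or 6

def pool(n):
    """Binomial hit distribution for n dice: P(k hits), k = 0..n."""
    return [comb(n, k) * P**k * (1 - P) ** (n - k) for k in range(n + 1)]

def damage_dist(n_attack, n_defense):
    """P(D = z) where D = max(0, A - F), per the formulas above."""
    a, f = pool(n_attack), pool(n_defense)
    d = [0.0] * (n_attack + 1)
    for j, pa in enumerate(a):
        for k, pf in enumerate(f):
            d[max(0, j - k)] += pa * pf
    return d

def survival(n_attack, n_defense, health, rounds):
    """P(total damage over `rounds` independent swings < health)."""
    per_round = damage_dist(n_attack, n_defense)
    total = [1.0]  # accumulated-damage distribution; starts at 0 damage
    for _ in range(rounds):
        new = [0.0] * (len(total) + len(per_round) - 1)
        for i, pi in enumerate(total):
            for j, pj in enumerate(per_round):
                new[i + j] += pi * pj  # convolution step
        total = new
    return sum(total[:health])
```

This is O(rounds × pool²) arithmetic, so exhaustive evaluation stays cheap even for long fights.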
