Originally Posted by NichG
That kind of difference only really matters when varying the number of characters who can participate is something you can make active decisions about. In this scenario, the ship has the crew it has, and if this skill challenge is worth running at all, the numbers should sit in a range where failure is actually possible. So the system you describe (-2 to the DC each round, +1 to p(success) per person) ends up with a fixed number of participants, which means there is some set of static DCs and bonuses/penalties that would produce the same distribution of success/failure, and the same challenge length, as your system.
So adding that kind of shift doesn't really fix the underlying problem: this skill challenge went on and on without ever concluding.
I think the real design error in this sort of track-based skill challenge framework is that once you fix the set of participants and the parameters of the challenge (number of successes needed, number of failures allowed, etc.), the outcome or distribution of outcomes is essentially determined (barring intentionally bad choices by the players, like using a skill whose expected result is worse than doing nothing). So you spend however long it takes to do something that could be summarized by a bit of math and a single percentile roll.

To make it worthwhile to play through, intermediate results should lead to situations where new decisions have to be made. And for those decisions to be really meaningful, they shouldn't just be a matter of optimizing a mathematical function; they should involve some degree of prioritization, or possible tension between the personal goals of the participants, so that the players aren't just deciding how to get what they want, but how to navigate the compromise between what they want, what they can have, what the risk is, and what the others might want.
Otherwise, it risks feeling like you're watching the DM take an hour just to roll a die.
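To make the "it all reduces to one percentile roll" point concrete, here's a quick Monte Carlo sketch. It's a minimal model, not any particular system's rules: each round the group makes one check with some fixed per-roll success chance, and the challenge ends at N successes or M failures. The specific thresholds (6 successes before 3 failures) and per-roll probabilities are hypothetical numbers chosen for illustration.

```python
import random

def run_challenge(p_success, successes_needed, failures_allowed, rng):
    """Play one track-based skill challenge; return (won, rounds_taken)."""
    successes = failures = rounds = 0
    while successes < successes_needed and failures < failures_allowed:
        rounds += 1
        if rng.random() < p_success:
            successes += 1
        else:
            failures += 1
    return successes >= successes_needed, rounds

def summarize(p_success, successes_needed=6, failures_allowed=3,
              trials=100_000, seed=1):
    """Estimate the win rate and average length for fixed parameters."""
    rng = random.Random(seed)
    results = [run_challenge(p_success, successes_needed, failures_allowed, rng)
               for _ in range(trials)]
    win_rate = sum(won for won, _ in results) / trials
    avg_rounds = sum(r for _, r in results) / trials
    return win_rate, avg_rounds

# Once the per-roll success chance is fixed, the whole challenge collapses
# to a single number (the group's win probability) plus a length distribution.
# No decision made during play changes either one.
for p in (0.55, 0.65, 0.75):
    win_rate, avg_rounds = summarize(p)
    print(f"p(roll)={p:.2f} -> p(win)={win_rate:.3f}, avg length={avg_rounds:.1f}")
```

Swapping in a round-by-round DC shift just changes the sequence of per-roll probabilities, which again is fully determined in advance for a fixed roster, so the win probability is still a single precomputable number.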