Beware of false precision. The idea that you need to be able to precisely calculate the probability of success (especially when degrees of success are involved) in order to have agency or make meaningful decisions is, to me, very strange. In real life, where we seem to make meaningful decisions all the time[1], not only do we not know the exact probability of success of anything we do, there is reason to believe we can't know that probability beyond the most high-level "X is more difficult than Y" sense. No, not even for simple operations you do all the time.

In fact, assigning probabilities to events (in the frequentist sense) can really only be done successfully for things without agency. You can define the probabilities of two subatomic particles interacting in particular ways (my field of research as a PhD student) with rigorous precision. You can't assign probabilities to two people interacting in particular ways with any precision whatsoever. Indeed, it's difficult if not impossible to chart out even a substantial fraction of the possible interaction pathways for any two arbitrary people. You can do slightly better for large groups of people interacting, but only slightly. Hari Seldon's psychohistory is not a real thing.

So in a game, all that's really needed is "Task A is harder than Task B," with "you can do that without enough risk to worry about"[2] and "that task is impossible for you" as the limiting cases. The whole bell curve/fine-grained detail thing? Yeah, it may be nice math, but it's not really modeling anything fundamental, especially at the scale of most campaigns (see the sketch below).
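
You can see the "scale of most campaigns" point with a quick simulation. This is a minimal sketch, not taken from any particular system: it assumes 3d6 roll-over resolution, adjacent target numbers of 11 and 12 (a 50% vs. 37.5% chance per check), and roughly 50 checks per campaign, all of which are my own illustrative assumptions. It asks how often, over a whole campaign, the nominally "harder" task would actually come out ahead.

```python
import random

def roll_3d6():
    """Sum of three six-sided dice -- the classic bell-curve roll."""
    return sum(random.randint(1, 6) for _ in range(3))

def campaign_successes(target, n_checks):
    """Count successes over one campaign's worth of checks."""
    return sum(roll_3d6() >= target for _ in range(n_checks))

def overlap_rate(easy_target, hard_target, n_checks=50, trials=10_000):
    """Fraction of simulated campaigns in which the nominally 'harder'
    task yields at least as many successes as the 'easier' one."""
    hits = 0
    for _ in range(trials):
        hard = campaign_successes(hard_target, n_checks)
        easy = campaign_successes(easy_target, n_checks)
        if hard >= easy:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    random.seed(1)
    # Adjacent 3d6 targets (assumed for illustration): 11+ succeeds
    # 50% of the time per check, 12+ succeeds 37.5% of the time.
    rate = overlap_rate(easy_target=11, hard_target=12)
    print(f"'Harder' task matched or beat the 'easier' one in "
          f"{rate:.0%} of 50-check campaigns")
```

Under these assumptions, roughly one campaign in ten has the "harder" target performing as well as or better than the "easier" one, so a player has little hope of ever experiencing that one-point difference as a real distinction, let alone the finer gradations within it.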

[1] not getting into the whole free will debate here.
[2] which is not no risk; there's always risk. But the risk is low enough, or uninteresting enough, that we're going to ignore it for game purposes.