
Science: Approximating a diagonal slope through 90 degree steps. Why does it not work?



Yora
2020-11-30, 08:17 AM
I know this does not work. And it's obvious it doesn't work. But it still feels like it should.

Say you have a line with a 45 degree slope on a coordinate system, going from (0,0) to (100,100), and you want to know the length of that path.

Then you draw a second line that starts at (0,0) and then goes 100 units to the right and then 100 units up, ending at (100,100). 100 units + 100 units is 200 units.

Draw a third path that goes 50 units right, 50 units up, 50 units right, and 50 units up. Again, you end up at (100,100), and 50+50+50+50 is again 200.

You can increase the number of steps to infinity, getting an infinite number of infinitesimally small units, and it will still always result in a total length of 200.

When the steps become infinitesimally small, the "stairs" should approach being a straight line. But the length stays fixed at 200 and never approaches the actual length of the straight line, which is 100·√2 ≈ 141.42.
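
Here is a minimal numeric sketch of the construction (plain Python; the helper names are just for illustration): the staircase length never moves off 200, while the straight line is 100·√2.

```python
import math

def staircase_length(n, size=100.0):
    """Path from (0,0) to (size,size) made of n 'go right, then go up' steps."""
    step = size / n
    return n * (step + step)          # every step contributes one run and one rise

def diagonal_length(size=100.0):
    return math.hypot(size, size)     # size * sqrt(2)

for n in (1, 2, 10, 1000, 10**6):
    print(n, round(staircase_length(n), 6))   # 200.0 for every n
print(round(diagonal_length(), 2))            # ~141.42
```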

The method of splitting the area under a curve into infinitesimally narrow rectangles to calculate the size of the area works. But it doesn't work to calculate the length of the curve. And this somehow feels surprising. (Though it's totally obvious that it doesn't.)

So I guess my actual question is: Why does my intuition tell me that infinitesimally small steps should approach a straight line when it doesn't?

Lord Torath
2020-11-30, 09:13 AM
This is something you need to work out with your intuition. :smallwink:

I can tell you why the 'steps' approximation doesn't work, but I can't tell you why your intuition tells you it should.

I'd read about this approximation when I was... 7 or 8, I think, and even then I knew it didn't work. I wanted it to work (I wanted a nice, even ratio between the lengths of the side and the length of the diagonal - a rational number well before I learned what rational numbers were), but I could tell by using a ruler that the diagonal across my Lost Treasure diving grid was much less than the sum of the vertical and horizontal distances.

It could be the fact that most approximations get better as you use smaller and smaller pieces to fit the curve. But this one doesn't get any better as you break it into smaller pieces, because it's not actually changing anything.

DavidSh
2020-11-30, 09:43 AM
You can, in fact, define distances between points that way. It's sometimes called the "Taxicab metric", for its relevance to cities with a rectilinear street grid. However, it fails one of the axioms commonly adopted for geometry, that two triangles ABC and DEF with equal angles at A and D, and equal pairs of lengths AB, DE and AC, DF, should always have equal angles at B and E, and at C and F, and so be congruent. That failing, critical parts of the proof of the Pythagorean Theorem no longer work.

(Try, for example, comparing the triangle in the plane with vertices (0,0), (1,0), (1,1) with the triangle with vertices (0,0), (0.5,0.5), (0,2).)

More intuitively, using the taxicab metric you can no longer rotate your figures by arbitrary angles and keep both the lengths of your sides and the angles between them constant. So it's not a useful metric for measuring the dimensions of things like pieces of wood, even though it might be useful for planning travel in a city.
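
As a concrete check of the two triangles in the parenthetical above (a small sketch; the taxicab helper is mine), the sides AB, AC match DE, DF and both triangles have a 45° angle at the origin, yet the third sides differ:

```python
def taxicab(p, q):
    """Taxicab (L1) distance between two points in the plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

A, B, C = (0, 0), (1, 0), (1, 1)
D, E, F = (0, 0), (0.5, 0.5), (0, 2)

print(taxicab(A, B), taxicab(D, E))  # both equal 1 -> matching first sides
print(taxicab(A, C), taxicab(D, F))  # both equal 2 -> matching second sides
# Both triangles have a 45-degree angle at the origin between those sides,
# yet side-angle-side fails to force congruence: the third sides differ.
print(taxicab(B, C), taxicab(E, F))  # 1 versus 2
```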

Yora
2020-11-30, 10:01 AM
Yeah, I was thinking about taxicab metrics when playing a grid based city builder game, and remembered that "diagonals" on such geometry are pointless. Which is where I started wondering why.

But this also is related to the coastline problem. And in that context, getting more and more granular with increasingly smaller units does get you gradually closer to the actual length of the outline of your shape. Why does increased granularity approach closer towards the real outline length of an island, but not the outline of a triangle under a 45 degree line on a graph?

Fyraltari
2020-11-30, 10:43 AM
My guess is that it's simply that the human mind isn't equipped to handle infinities and therefore does a shoddy job at it. We understand that an infinite sum of infinitely small terms can add up to something finite (Zeno's paradox), and since your thought experiments create something that looks like a line, we assume the result is a line. And we're wrong.

DavidSh
2020-11-30, 10:48 AM
I think part of the distinction is between a piecewise linear approximation to a curve where the corners of the approximation are on the target curve, such as in the coastline case, and an approximation where the corners are not always on the target curve. You can approximate a line with a series of zigzags such that the zigzags converge pointwise to the line while the lengths of the zigzags increase without limit.

factotum
2020-11-30, 11:29 AM
But this also is related to the coastline problem. And in that context, getting more and more granular with increasingly smaller units does get you gradually closer to the actual length of the outline of your shape.

I don't think it does? The whole point of the coastline problem is that, as you make your measurements increasingly granular, the length just keeps increasing, at least up until you start using rulers that are Planck length in size!

Tvtyrant
2020-11-30, 12:42 PM
Yeah, I was thinking about taxicab metrics when playing a grid based city builder game, and remembered that "diagonals" on such geometry are pointless. Which is where I started wondering why.

But this also is related to the coastline problem. And in that context, getting more and more granular with increasingly smaller units does get you gradually closer to the actual length of the outline of your shape. Why does increased granularity approach closer towards the real outline length of an island, but not the outline of a triangle under a 45 degree line on a graph?

I believe you have that backwards. The coastline effect is that there is no smooth line at any scale; it is jagged all the way up and down. This is useful as a metaphor for other systems where every attempt to plot them is laid over ragged underlying behavior, such as economics.

Battleship789
2020-11-30, 03:22 PM
Yeah, I was thinking about taxicab metrics when playing a grid based city builder game, and remembered that "diagonals" on such geometry are pointless. Which is where I started wondering why.

But this also is related to the coastline problem. And in that context, getting more and more granular with increasingly smaller units does get you gradually closer to the actual length of the outline of your shape. Why does increased granularity approach closer towards the real outline length of an island, but not the outline of a triangle under a 45 degree line on a graph?

As has already been noted, the coastline problem has the result that increasing the granularity of the measurements increases the length of the coastline (ad infinitum). This is because one can think of coastlines as fractals, and 1-D fractals have infinite lengths confined to a finite area (Koch snowflakes, for example).
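
A tiny sketch of the standard Koch snowflake numbers (the loop below is just illustrative): each iteration multiplies the boundary length by 4/3, so the perimeter diverges while the enclosed area stays finite, and the fractal dimension solves 3^D = 4.

```python
import math

D = math.log(4) / math.log(3)     # fractal dimension, since 3**D == 4
print(round(D, 4))                # ~1.2619

perimeter = 3.0                   # triangle with unit sides
for k in range(6):
    print(k, round(perimeter, 4))
    perimeter *= 4 / 3            # every segment becomes 4 segments of 1/3 length
```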

sktarq
2020-11-30, 05:55 PM
On the intuition front:

We have a basic "logic"/heuristic that as things appear more similar across a set (and especially if they are of the same or a similar type, e.g. "a line"), they should also behave in a more similar fashion.

Honestly, it normally works.

Radar
2020-11-30, 06:36 PM
So I guess my actual question is: Why does my intuition tell me that infinitesimally small steps should approach a straight line when it doesn't?
I cannot speak for your intuition, but the reason this iterative approximation fails lies in the definition of what it means for curves to be close, and that definition has nothing to do with the lengths of the respective curves. To make things simpler, let us take a horizontal line y = 0 of unit length and, for example, a set of approximations yₙ(x) = sin(π n² x)/n. So what is the distance between y and yₙ? It is a simple integral:

√( ∫₀¹ yₙ² dx )

Standard Euclidean definition: sum (integral) of squares, then take a square root. This gives a distance of 1/(n√2), which shrinks to zero as n grows.

Now if you want to calculate the length, you need to look at the derivative instead of the value itself:

∫₀¹ √( 1 + yₙ′² ) dx

This does not have a simple closed form, but for large n it is roughly equal to 2n.

Functions behave like that because they form an infinite-dimensional space and, as always, infinities break something.
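
To put numbers on the two integrals above, a rough midpoint-rule sketch (my own code; the sample counts are arbitrary) of how the distance to the line shrinks while the length grows:

```python
import math

def l2_distance(n, samples=100000):
    """sqrt( integral_0^1 of y_n(x)^2 dx ) for y_n(x) = sin(pi n^2 x) / n."""
    dx = 1.0 / samples
    total = sum((math.sin(math.pi * n * n * (i + 0.5) * dx) / n) ** 2
                for i in range(samples))
    return math.sqrt(total * dx)

def arc_length(n, samples=100000):
    """integral_0^1 of sqrt(1 + y_n'(x)^2) dx, with y_n'(x) = pi n cos(pi n^2 x)."""
    dx = 1.0 / samples
    total = sum(math.sqrt(1 + (math.pi * n * math.cos(math.pi * n * n * (i + 0.5) * dx)) ** 2)
                for i in range(samples))
    return total * dx

for n in (1, 5, 20):
    print(n, round(l2_distance(n), 4), round(arc_length(n), 2))
# the distance falls off like 1/(n*sqrt(2)); the length grows roughly like 2*n
```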

Peelee
2020-12-01, 06:45 PM
Heard this somewhere, not mine:


Let's assume that the curves we're talking about (and here a straight line counts as a curve) are graphs of functions. It happens that the arc length of a curve is a property of the derivative, not of the function itself. That is, differential calculus is involved.

It happens that the approximations by staircase-like jagged lines form something that we say converges to the original curve, but it converges in too rough a manner.

Because of this problem with the convergence, the derivatives (the differentials) of the approximating curves do not converge to the derivative (differential) of the original curve.

Well, if arc length is a property of the derivative, this means there is no guarantee that we can approximate arc length this way.

For a similar reason you can try, and fail, a very famous approximation that seems to prove that pi = 4.

There it is, in short. For an appetizer.

Sean Mirrsen
2020-12-03, 02:48 AM
For why this feels like it should intuitively work when it doesn't - it's because intuition sometimes fails to notice fundamental flaws in the premise. Like the fact that horizontal and vertical components of 90 degree steps don't add up to their diagonal component.

It also feels like it should work because on some level it does work - a complex curve will somewhat approach its true length as it is broken up into smaller steps. An S-curve, for instance. The more complex it is, the more bends and turns it has, the further you'll come to its actual length by breaking it up. It's essentially the inverse coastline problem - the coastline problem deals with an infinitely complex curve and a coarse measuring method, where measured length increases as measuring precision increases, approaching but never reaching the curve's true length; and here you have an infinitely coarse measuring method and a simple curve, where measured length increases as curve complexity increases, again approaching but never reaching the curve's true length.

Breaking a curve up into steps can work, but it has to be done differently. You don't count the "true length" of a curve by summing up the steps, you just count the steps and approximate the average step length. If you think of the "steps" as pixels on a bitmap - as you increase the render resolution, you come closer to seeing the true curve, but because you're always looking at a grid of pixels you can only ever know the size of the curve in terms of pixels, and can never see the actual dimensions of the curve unless you know the dimensions of a pixel.

Rydiro
2020-12-03, 06:22 AM
As has already been noted, the coastline problem has the result that increasing the granularity of the measurements increases the length of the coastline (ad infinitum). This is because one can think of coastlines as fractals, and 1-D fractals have infinite lengths confined to a finite area (Koch snowflakes, for example).
Those fractals usually aren't 1-dimensional. They usually have a dimension that is a fraction between 1 and 2. I think that's why they are called fractals.
To the OP: you need to establish a solid approximation theory for curves. Then you will know why your approach fails.
There should be one, but I actually haven't studied manifolds/analysis enough to point you in the right direction.

Guess: Your approximation needs to converge in the curvature too. Since the steps have infinite curvature at the corners, this doesn't hold.

EDIT: Thinking about it in simpler terms, it's because the derivatives do not converge. The length of your curve as a function is integral(sqrt(f'² + 1)). For the approximations to converge, they need to converge in the derivatives with respect to the appropriate norm.
Your approximation keeps a constant distance from your curve in the derivative function space.

Radar
2020-12-03, 09:31 AM
Those fractals usually aren't 1-dimensional. They usually have a dimension that is a fraction between 1 and 2. I think that's why they are called fractals.
To the OP: you need to establish a solid approximation theory for curves. Then you will know why your approach fails.
There should be one, but I actually haven't studied manifolds/analysis enough to point you in the right direction.

Guess: Your approximation needs to converge in the curvature too. Since the steps have infinite curvature at the corners, this doesn't hold.

EDIT: Thinking about it in simpler terms, it's because the derivatives do not converge. The length of your curve as a function is integral(sqrt(f'² + 1)). For the approximations to converge, they need to converge in the derivatives with respect to the appropriate norm.
Your approximation keeps a constant distance from your curve in the derivative function space.
Yup! It is indeed about the derivative. I gave an example of an approximation using smooth functions that still does not have the same length as the original line. The thing is, without constraining possible approximations of a given curve, you will most likely always miss some qualities. Once we fix the shape and length we can for example ask about the mean curvature, which will force us to deal with the second derivative and so on.

One of the reasonable solutions is a piecewise interpolation using polynomials of an order appropriate to the number of relevant qualities you need to keep track of. If you want complete convergence in all derivatives, there are things like Fourier expansion and other similar solutions, which for smooth curves allow you to build a good approximation in arbitrarily high-order derivatives, provided that you include enough terms of the series.
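
A rough sketch of that last idea (my own code; f(x) = x(1−x) is just a convenient smooth sample curve): build truncated Fourier sine series of f and watch their arc lengths home in on the arc length of f itself.

```python
import math

def f(x):                                   # sample smooth curve on [0, 1]
    return x * (1 - x)

def sine_coeffs(terms, m=2000):
    """b_k = 2 * integral_0^1 f(t) sin(k*pi*t) dt, estimated with a midpoint rule."""
    return [2.0 / m * sum(f((i + 0.5) / m) * math.sin(k * math.pi * (i + 0.5) / m)
                          for i in range(m))
            for k in range(1, terms + 1)]

def partial_sum(coeffs):
    return lambda x: sum(b * math.sin((k + 1) * math.pi * x)
                         for k, b in enumerate(coeffs))

def arc_length(g, samples=4000):
    pts = [(i / samples, g(i / samples)) for i in range(samples + 1)]
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

print(round(arc_length(f), 5))              # the target length, about 1.1478
for terms in (1, 3, 7):
    print(terms, round(arc_length(partial_sum(sine_coeffs(terms))), 5))
# the lengths of the truncated series approach the length of f
```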

Glorthindel
2020-12-03, 11:44 AM
It's similar to a problem I often encounter in my profession. When factoring the thermal properties of a ground floor, you need to refer to its P/A factor (perimeter / area). The higher the P/A factor, the more insulation you need. For the lowest P/A, you need minimal perimeter and maximum area, and a square is pretty much the best way to achieve this. The more kinks, outcroppings, and other features you add to the edges of the floor plan, the more the perimeter shoots up, while the area might as likely be decreasing as increasing. Intuition says that increasing the floor area must always improve the situation, but if the ratio of perimeter to area is going the wrong way, it's still making things worse. Quite simply, while a larger area is better in isolation, it's never in isolation in this calculation, so you always need to pay attention to what the change to one element is doing to the other.

The same is happening to your jagged-sided triangle. As soon as you break it into right-angle steps, the length of the perimeter of the triangle shoots up, and no matter how many smaller segments you break it into, the perimeter is not changing. The area, if you were paying attention to that figure, gets nearer and nearer to the area of your original flat-sided triangle the smaller your segments get, but that does nothing to the perimeter.
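
A short sketch of exactly that (illustrative Python; the right-then-up staircase from the original post): the area under the staircase crawls toward the triangle's 5000, while the path length is stuck at 200.

```python
def staircase_area_and_path(n, size=100.0):
    """Staircase from (0,0) to (size,size): go right, then up, n times.
    Returns (area under the staircase, length of the stepped path)."""
    s = size / n
    area = sum(i * s * s for i in range(n))   # column i has width s and height i*s
    path = n * (s + s)                        # each step adds one run and one rise
    return area, path

for n in (1, 10, 100, 10000):
    area, path = staircase_area_and_path(n)
    print(n, round(area, 2), round(path, 2))
# area -> 5000 (the triangle under the diagonal); path length stays 200
```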

georgie_leech
2020-12-03, 01:04 PM
Oh hey, I actually remember an old Vihart video (https://www.youtube.com/watch?v=D2xYjiL8yyE) on this very subject.

Radar
2020-12-03, 02:40 PM
Oh hey, I actually remember an old Vihart video (https://www.youtube.com/watch?v=D2xYjiL8yyE) on this very subject.
This is actually a pretty cool video!

Battleship789
2020-12-04, 12:59 AM
Those fractals usually aren't 1-dimensional. They usually have a dimension that is a fraction between 1 and 2. I think that's why they are called fractals.

*snip*

Topologically, the snowflake is 1D, though you are correct that it has a fractal dimension different from this (its fractal dimension is D, where 3^D = 4, i.e. D = log 4 / log 3 ≈ 1.26). The fact that its fractal dimension differs from its topological dimension is why it's called a fractal, as you correctly recalled.

Jay R
2020-12-05, 04:11 PM
Because you are not actually changing the kind of path you are making. You are continuing to walk at 45 degrees off the path you want. And whether you take one turn or a thousand, sin(45°) will remain 0.7071.

Your brain wants to believe you're going from walking at 45 degrees off your course, to 22.5 degrees off, to 11.25 degrees off, etc. If you did that, each step would reduce the length of the path. But you are not in fact changing how far off the right direction you are going; you are simply making the exact same error in more, smaller pieces.

Rydiro
2020-12-07, 07:19 AM
Topologically, the snowflake is 1D, though you are correct that it has a fractal dimension different from this (its fractal dimension is D, where 3^D = 4, i.e. D = log 4 / log 3 ≈ 1.26). The fact that its fractal dimension differs from its topological dimension is why it's called a fractal, as you correctly recalled.
Topological is probably not the right way to look at dimensions, because it usually simplifies a lot of things.
I guess by topological, you mean a continuous mapping of the interval to the snowflake as embedded into the plane.

Battleship789
2020-12-07, 11:53 PM
Topological is probably not the right way to look at dimensions, because it usually simplifies a lot of things.
I guess by topological, you mean a continuous mapping of the interval to the snowflake as embedded into the plane.

That is a fair point in this instance, though I was only speaking to the fact that (most) 1D topological fractals have infinite perimeters confined to a finite area (when embedded in the plane). Typically, topological dimension is the default assumption when speaking about "dimensions", though it of course depends on context (which I should've taken into account when responding.)

That is a way to tell if two spaces are homeomorphic (i.e., they share all of their topological properties, including dimension), though it's important to note that the mapping must have a continuous inverse as well in order to be a homeomorphism.

Rydiro
2020-12-10, 10:02 AM
That is a fair point in this instance, though I was only speaking to the fact that (most) 1D topological fractals have infinite perimeters confined to a finite area (when embedded in the plane). Typically, topological dimension is the default assumption when speaking about "dimensions", though it of course depends on context (which I should've taken into account when responding.)

That is a way to tell if two spaces are homeomorphic (i.e., they share all of their topological properties, including dimension), though it's important to note that the mapping must have a continuous inverse as well in order to be a homeomorphism.
Honestly, I never heard about a topological dimension (just about things being homeomorphic instead). And I took two semesters of topology.

gomipile
2020-12-10, 03:23 PM
Honestly, I never heard about a topological dimension (just about things being homeomorphic instead). And I took two semesters of topology.

I think what is meant is something like "is a 1 dimensional topological manifold."

Battleship789
2020-12-11, 02:29 AM
I think what is meant is something like "is a 1 dimensional topological manifold."

Not quite, as a topological space can have a dimension but not be a manifold. For instance, a self-intersecting curve isn't a manifold but has topological dimension 1.

You might be more familiar with the name Lebesgue covering dimension.

Bohandas
2020-12-11, 02:48 AM
I always assumed that this was one of those things that shifted at infinity. Like the area of the Sierpinski curve, which after any finite number of iterations is equal to zero, but after infinitely many iterations becomes equal to the area of the planar region the curve is contained in.

EDIT:
Like shouldn't the risers and ledges reduce to a sequence of aligned points?

gomipile
2020-12-11, 03:23 AM
Not quite, as a topological space can have a dimension but not be a manifold. For instance, a self-intersecting curve isn't a manifold but has topological dimension 1.

You might be more familiar with the name Lebesgue covering dimension.

Fair. I'm pretty sure I've seen the Lebesgue covering dimension referred to as just the topological dimension before, possibly semi-informally.

Zombimode
2020-12-11, 03:35 PM
So I guess my actual question is: Why does my intuition tell me that infinitesimally small steps should approach a straight line when it doesn't?

Maybe you're subconsciously thinking of the lines as having a thickness. And that is understandable: if you actually draw the line, it does have a thickness.

Ibrinar
2020-12-11, 07:23 PM
I think part of the intuition being wrong is the assumption that an approximation should get closer in all regards, while this one specifically avoids changing the length. Plus, alternating pointlessly between purely up/down/left/right is just fairly unlike how we would normally connect something when there are no obstacles.

Mister Tom
2020-12-12, 06:48 AM
Interesting question!

The point, I think, which ties these two together is that your intuition needs to consider what happens if the mind's eye zooms in on part of the approximation it is building. Zoom in on the taxicab sawtooth triangles enough and, lo and behold, triangles again. This "self-similarity" applies in the same way to the _mathematical_ coastline: while the measured perimeter doth verily increase, if you zoom in enough you're looking at the original problem in microcosm, i.e. infinitely complex detail approximated by a few straight lines. (What an actual coastline looks like, I couldn't tell you.)

By way of contrast, your intuition does work if you're approximating a circle's perimeter like Archimedes did, because as any flat-earther will tell you, if you zoom in on a circle's edge the apparent curvature drops toward zero.

Radar
2020-12-12, 07:52 AM
I think I understand the problem a bit better than before.

Aside from proofs that not every iterative approximation works, I think it would be nice to see which ones do and which ones do not. The simplest way to distinguish good and bad approximations is to look at the derivative of a given function.

All bad approximations have one thing in common: the derivative does not have a proper limit at all. For example, if we consider a right-angled zig-zag function and how well it resembles a straight, horizontal line, the derivative of the zig-zag would be either 1 or -1 depending on the particular place, while for the horizontal line it is obviously 0 everywhere. If we make the steps in the zig-zag smaller, the derivative will still be either 1 or -1, but the intervals get shorter. In the limit of infinitely small zig-zag steps we cannot tell anymore whether the derivative should be 1 or -1 at any given point. The derivative becomes undefined.

All the good approximations have a well-defined derivative in the limit (as long as the curve we want to approximate is also differentiable). What does that mean? That if you zoom in enough at any point, those curves look like a straight line. Thanks to that, even when we take the limit, the length will be consistent with the curve we want to approximate.
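
For instance, a zig-zag of slope ±1 squeezed toward the x-axis (a sketch with made-up helper names) converges uniformly to the flat line of length 1, yet its own length is pinned at √2:

```python
import math

def zigzag(x, n):
    """Triangle wave with n teeth on [0, 1] whose slope is always +1 or -1."""
    period = 1.0 / n
    t = (x % period) / period
    return (t if t < 0.5 else 1 - t) * period

def polyline_length(g, samples=100000):
    pts = [(i / samples, g(i / samples)) for i in range(samples + 1)]
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

for n in (1, 10, 100):
    g = lambda x, n=n: zigzag(x, n)
    height = max(g(i / 1000) for i in range(1001))
    print(n, round(height, 4), round(polyline_length(g), 4))
# the teeth shrink toward the line y = 0, but the length never leaves sqrt(2) ≈ 1.4142
```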

JCarter426
2020-12-14, 05:21 AM
This is known as the Staircase Paradox. It's the same logic used to prove π = 4.


The method of splitting the area under a curve into infinitesimally narrow rectangles to calculate the size of the area works. But it doesn't work to calculate the length of the curve.
It does, but that's not what's happening here.

An arc length can be approximated by dividing the curve into triangles and adding up all the hypotenuses:

S ≈ ∑ √(Δxᵢ² + Δyᵢ²)

If we take a Riemann sum, the limit as the number of triangles approaches infinity lets us get rid of that approximately qualifier, and we get the actual arc length. The integral looks a little different, but is ultimately derived from this.

S = ∫ₐᵇ √[1 + f′(x)²] dx

The difference is that in the Staircase Paradox, you are summing the two legs and not the hypotenuse. It doesn't matter if you do this an infinite number of times, because you are summing something different from what you want, and it converges on a different value.
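
A minimal sketch of that difference (my own code, with y = x² on [0, 1] as a stand-in curve): summing hypotenuses converges to the true arc length, summing the two legs does not budge.

```python
import math

def f(x):                          # stand-in curve
    return x * x

def sum_hypotenuses(n):
    """S ≈ Σ sqrt(Δx² + Δy²) with the sample points taken on the curve."""
    xs = [i / n for i in range(n + 1)]
    return sum(math.hypot(xs[i + 1] - xs[i], f(xs[i + 1]) - f(xs[i]))
               for i in range(n))

def sum_legs(n):
    """The staircase version: Σ (|Δx| + |Δy|) over the same sample points."""
    xs = [i / n for i in range(n + 1)]
    return sum(abs(xs[i + 1] - xs[i]) + abs(f(xs[i + 1]) - f(xs[i]))
               for i in range(n))

for n in (4, 64, 1024):
    print(n, round(sum_hypotenuses(n), 5), round(sum_legs(n), 5))
# hypotenuses -> true arc length ≈ 1.47894; legs -> 2.0 every time
```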

Another way to think about it intuitively is that what you are summing is always longer than the line. If you do this an infinite number of times, you are still going to have something longer than the line you want.

DavidSh
2020-12-14, 06:00 AM
Another way to think about it intuitively is that what you are summing is always longer than the line. If you do this an infinite number of times, you are still going to have something longer than the line you want.
That's an intuition you want to be careful about. After all, a(i) > b(i) for all i does not guarantee that lim a(i) > lim b(i) in general (the limit only guarantees ≥).

JCarter426
2020-12-15, 06:39 AM
Well yes, and you can sum 1 + 2 + 3 ... to be -1/12 if you want. With infinity, any sort of intuition eventually breaks down.

But I think in this case, it's clear to see that if you are adding line segments that are always longer than the line segment you are trying to measure—if they weren't, you wouldn't have a triangle—then there is no intuitive reason for that to converge on the correct answer. Conveniently, there is also no mathematical reason.