12 October 2007

Understanding Integers


Which is the largest of these rational numbers?

  • 1.5075643002
  • 5.2
  • 5.193
  • 5.213454

If you say 5.213454, then you're still thinking in integer terms.  If you say 1.5075643002 is the largest within a rational number frame of reference, then you're thinking as I am right now.

Is Pi an irrational number, or have we just not effectively defined it yet?  With the Halting Problem in mind, can we ever determine whether Pi is rational or irrational?  On that basis, is there such a thing as irrational real numbers, or are these just rational numbers beyond the reach of our precision?  Unlike most, this last question will be answered within this article.

Rational numbers

I was taught that rational numbers were those that can be expressed as one integer divided by another - but I'm reconsidering that as numbers that lie between fixed bounds, or more intuitively, "parts of a whole".

When we deal with rational numbers in everyday life, we're not really dealing with rational numbers as I conceptualize them within this article.  We're just dealing with clumps of integers. 

To enumerate things, there needs to be a frame of reference.  If you say "six" I'll ask 'six what?', and if you say "a quarter", I'll ask 'a quarter of what?'

Usually, the answer is something quite arbitrary, such as "king's toenails".  Want less arbitrary?  How about "the length of a certain platinum-iridium bar in Paris stored at a particular temperature" - feel better now?

Your cutting machine won't explode in a cloud of quantum dust if you set it to a fraction of a millimeter; within the machine's "integers" are just smaller "integers".  If you think you understand anything about rational numbers from contexts like these, you're kidding yourself, in my humble opinion.

Integers

To put teeth into integers, they have to enumerate something fundamental, something atomic.  By atomic, we used to say "something that cannot be divided further"; today we might say "something that cannot be divided further without applying a state change, or dropping down into a deeper level of abstraction".

Ah - now we're getting somewhere!  Levels of abstraction, total information content, dimensions of an array... think of chemistry as a level of abstraction "above" nuclear physics, or the computer's digital level of abstraction as "above" that of analog volts, nanometers and nanoseconds.

If layers of abstraction are properly nested (are they?), then each may appear to be a closed single atom from "above", rational numbers from "within", and an infinite series of integers from "below".  Or not - toss that around your skull for a while, and see what you conclude.

Closed systems

Within a closed system, there may be a large but finite number of atomic values (or integers, in the non-ordered sense), being the total information content of that system.  If rational numbers are used to describe entities within the system, they are by necessity defined as values between 0 and 1, where 1 = "the system".  In this sense, 7.45 is not a rational number, but might be considered as an offset 7 from outside the system, and .45 within the system.

You might consider "size" as solidity of existence, i.e. the precision (space) or certainty (time, or probability) at which an entity is defined.  If you can define it right down to specifying exactly which "atom" it is, you have reached the maximum information for the closed system.  So 0.45765432 is a "larger" number than 0.5, in terms of this closed-system logic.
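
This "size as specificity" idea can be sketched in a few lines of Python. It's a toy model under loud assumptions: a closed system of N atoms (N is an arbitrary choice here), and decimal precision as the only way to narrow things down.

```python
# Toy sketch of the closed-system idea above, assuming a system of
# N = 10**9 "atoms" (N is an arbitrary choice for illustration).
N = 10**9

def slack(r: str) -> int:
    """How many atoms a decimal fraction like '0.5' leaves undetermined.
    Fewer atoms of slack = more specific = 'larger' in this article's sense."""
    digits = len(r.split(".")[1])
    return N // 10**digits

print(slack("0.5"))         # 100000000 atoms still undetermined
print(slack("0.45765432"))  # only 10 atoms left: the more "solid" number
```

When the slack reaches a single atom, you've hit the maximum information content of the system, which is the sense in which 0.45765432 is "larger" than 0.5.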

You can consider integers vs. rational numbers as being defined by whether you are specifying (or resolving) things in a closed system (rational numbers, as described within this article) or ordering things in an open system (integers). 

What closes an integer system is your confidence in the order of the integers you enumerate.  What closes a rational system is whether you can "see" right down to the underlying "atoms".

Information and energy

Can one specify an entity within a closed system with a precision so high that it is absolute, within the context of that system? 

We may generalize Pauli's exclusion principle to state that no two entities may be identical in all respects (or rather, that if they were, they would define the same entity).

Then there's Heisenberg's uncertainty principle, which predicts an inability to determine all information about an entity without instantly invalidating that information.  Instantly?  For a zero value of time to exist implies an "atom" of time that zero time is devoid of... otherwise that "zero" is just a probability smudge near one end of some unsigned axis (or an arbitrary "mid-"point of a signed axis).

Can you fix (say) an electron so that its state is identical for a certain period of time after it is observed?  How much energy is required to do that?  Intuitively, I see a relationship between specificity, i.e. the precision or certainty to which an entity is defined, and the energy required to maintain that state.

Entropy

If "things fall apart", then why?  Where does the automatic blurring of information come from?  Why does it take more work to create a piece of metal that is 2.6578135g in mass than one manufactured to 2.65g with a tolerance of 0.005g?
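
One rough way to put a number on that extra work is information content: pinning a value down within some working range takes about log2(range/tolerance) bits of specification. A back-of-envelope Python sketch, assuming a 1 g working range (an arbitrary choice for illustration):

```python
from math import log2

RANGE = 1.0  # grams; an assumed working range, purely for illustration

def spec_bits(tolerance: float) -> float:
    """Bits needed to pin a value inside RANGE down to the given tolerance."""
    return log2(RANGE / tolerance)

print(f"{spec_bits(0.005):.1f} bits")  # the 2.65 +/- 0.005 g part: ~7.6 bits
print(f"{spec_bits(1e-7):.1f} bits")   # the 2.6578135 g part: ~23.3 bits
```

Each extra decimal place of precision costs another ~3.3 bits of specification, which at least gestures at why the tighter part takes more work to make.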

One answer may be: from deeper abstraction layers nested within what the current abstraction layer sees as being integer, or "atomic".  The nuclear climate may affect where an electron currently "is" and how likely it is to change energy state; what appears to be a static chemical equilibrium could "spontaneously" change, just as what appears to be reliable digital processing can be corrupted by analog voltage changes that exceed the trigger points that define the digital layer of abstraction. 

In this sense, the arrangement of sub-nuclear entities may define whether something is a neutron or a proton with an electron somewhere out there; the difference is profound for the chemical layer of abstraction above.

To freeze a state within a given layer of abstraction, may require mastery over deeper levels of abstraction that may "randomize" it.

Existence

What does it mean, to exist?  One can sense this as the application of specificity, or a stipulation of information that defines what then exists.  Our perspective is that mass really exists, and just happens to be massive in terms of the energy (information?) contained within it. 

There's a sense of energy-information conservation in reactions such as matter and antimatter annihilating their mass and producing a large amount of energy.  How much energy?  Does that imply the magnitude of information that defined the masses, or mass and anti-mass?  Do you like your integers signed or unsigned?  Is the difference merely a matter of externalizing one piece of information as the "sign bit"?  What do things look like if you externalize two bits in that way?

Like most of my head-spin articles, this one leaves you hanging at this point.  No tidy summary of what I "told" you, as I have no certainty on any of this; think of this article as a question (or RFC, if you like), not a statement.

2 comments:

ryanb said...

Incredible... I've been kicking around this same thought for the better part of 7 years now and specifically "rationalized" my thinking about integers the other day, much as you have laid out here.

What does it mean for a number to be irrational and how do we represent those values? For that matter, how do we represent whole numbers? Whole numbers are really just as arbitrary as rational and irrational numbers. Our precision, or perhaps more precisely our resolution limits our ability to find an exact representation. Some numbers are irrational simply because we do not have a vocabulary that allows us to describe them.

It is the limitations in the framework of current knowledge that make us blind to other possibilities. I think it is hard for most people to accept the idea of more than 4 dimensions (3 spatial + time) because it is an intangible experience for them.

One of the first concepts that I've been challenging is the idea that division by 0 is undefined. What I believe works is that division by 0 is actually infinity. In fact, it wouldn't be too difficult to argue that lim x -> 0+ of 1/x is +infinity and that lim x -> 0- of 1/x is -infinity. Left at that, we create a massive discontinuity. But what if we define infinity as the discontinuity between -infinity and +infinity? Now we have bridged both ends of the spectrum just like we do at -0 and +0.
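
The two one-sided limits are easy to watch numerically, and the "bridge both ends" idea resembles wrapping the real line onto a circle. A rough Python sketch (using atan as the wrapping map is my choice of illustration, not the commenter's):

```python
import math

# Watch 1/x and 1/(-x) run off to +inf and -inf as x -> 0 from the right.
for x in (1e-1, 1e-4, 1e-8):
    print(f"x={x:g}: 1/x={1/x:g}, 1/(-x)={1/(-x):g}")

# One way to "bridge" the two ends: atan maps the whole real line into
# (-pi/2, pi/2), so +infinity and -infinity become two finite endpoints
# that a projective construction could glue into a single point.
print(math.atan(1e16))  # effectively pi/2 at float precision
print(-math.pi / 2)     # the other end of the wrapped line
```

Gluing those two endpoints together gives the single unsigned infinity of the projective line, which is roughly the "discontinuity between -infinity and +infinity" being described.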

This is distasteful because it is a departure from Euclidean geometry and doesn't allow for the Cartesian coordinate system that we use today. But it doesn't have to. Curvature k is defined as d(tangential angle)/d(arc length). As we have defined division by 0, now we can have a lim arc length -> 0 and the tangential angle would continue to decrease. At some value +0 or -0, there would effectively be no curvature. This is the realm of the Cartesian planes that we are taught in elementary school.

Any number that we can think of will exist in that coordinate system, and will be so much closer to 0 than infinity, that for all intents it will fall along a straight line. Like you, this is more of an RFC than fact... I'd certainly like to explore this further and maybe someone else already has.

Now, this is where I came in the other day, and where I see your examination of rational numbers syncing up closely with my logic. Each number, 7.45 to use your example, is defined as a sum of discrete parts. Today we define each units place as possessing a unit value of 0-9; an offset 7 from outside the system if you will. In fact we are taught that this is 7/1 + 4/10 + 5/100. The reality is that it is just as appropriate to construct that rational number from 37/5 + 1/20. The sum of these pieces is just that: pieces. What previously took me 3 informational quanta to describe was reduced to 3 informational quanta. This made me question the atomic nature of integers.
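
The two decompositions of 7.45 can be checked exactly with Python's Fraction type:

```python
from fractions import Fraction

# 7/1 + 4/10 + 5/100 (place-value pieces) vs. 37/5 + 1/20 (the alternative).
place_value = Fraction(7, 1) + Fraction(4, 10) + Fraction(5, 100)
alternative = Fraction(37, 5) + Fraction(1, 20)
print(place_value)  # 149/20
print(alternative)  # 149/20 -- the same rational, built from different pieces
```

Both constructions collapse to the single rational 149/20, so the choice of pieces really is a choice of notation, not of value.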

Look at http://books.google.com/books?id=-vPtcriflH0C&pg=PA35&lpg=PA35&dq=distribution+of+floating+point+numbers&source=web&ots=TYDpMYHYUB&sig=StBi2X9J_MaSC80vPOvfrGgjtaY

On that page, a little way down, you will see the output from floatgui.m, a MATLAB program to graph the distribution of floating point numbers. If we have these holes in our system of floating point numbers, might we have similar gaps in our number system that we can't describe; similarly, might this be where we find our irrational numbers?
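
The gaps that floatgui.m plots can be reproduced with a toy float format in a few lines of Python (the 2-bit fraction and the small exponent range are arbitrary choices for illustration):

```python
# Every positive value of a toy float format: significand 1.ff (2 fraction
# bits) times 2**e for e in -1..1. The spacing doubles at each power of 2,
# giving the uneven coverage the floatgui.m figure shows.
FRAC_BITS = 2
values = sorted((1 + f / 2**FRAC_BITS) * 2**e
                for e in range(-1, 2)
                for f in range(2**FRAC_BITS))
gaps = [round(b - a, 4) for a, b in zip(values, values[1:])]
print(values)  # 0.5 .. 3.5, twelve representable numbers in all
print(gaps)    # 0.125 near 0.5, widening to 0.5 near 3.5
```

Everything between two adjacent representable values is a "hole": a number the format can point at only by rounding, which is the picture being compared to irrationals here.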

While I know why we have our limitations in floating point, I can't help but wonder if we have similar deficiencies in rational numbers. Floating point works well with numbers closer to 0; perhaps rational numbers have a similar hidden defect. For numbers close to 0, i.e. anything that you can actually compose, rational numbers serve us very well, but as you transition to numbers closer to infinity you begin to run into more irregularities.

The uncertainty falls out of this problem. The more precisely you define something, the more likely you are to find one of these areas that you cannot define. A particle that is at rest somewhere between x and x+1 will need to be defined as being either x or x+1 when it really falls somewhere in between.

This is where I get stumped. Doesn't this suggest that there is a finite resolution to everything? We are told that there are just as many numbers between 0 and infinity as there are between 0 and 1. To me this seems to suggest otherwise.

What is surprising me more is that I can seem to apply this "bifurcation" analysis to everything from math and science to philosophy and religion. I'm going to have to kick this around for a few more years, but for certain it is something that I'm going to write more about at a later date.

Chris Quirke said...

On "What does it mean for a number to be irrational and how do we represent those values?", I'd say the concept of rational/irrational numbers, as distinct entities, is false.

Instead, it's a matter of convenience as to how deeply you want to specify rational numbers. If to only 2 decimal places, then 3.457 is as "irrational" as Pi. If to zero decimal places, then you're talking integers.

The Halting Problem implies it is impossible to predict which "irrational" numbers are truly irrational, vs. rational numbers that happen to have an inconveniently high computational overhead to specify exactly what they are.

Some of that overhead may be skewed by the type of calculations you use, e.g. it may be "easy" enough to describe 22/7 as rational, but "too difficult" to pin down the same value to x decimal places if using x.xx notation rather than x/x notation.
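
That notation-dependent overhead is easy to demonstrate with Python's exact and decimal arithmetic:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

x = Fraction(22, 7)  # exact: two integers and we're done
getcontext().prec = 30
d = Decimal(22) / Decimal(7)
print(x)  # 22/7
print(d)  # 3.1428571428571428... the block 142857 repeats forever,
          # so no finite x.xx expansion ever pins the value down exactly
```

In x/x notation the number is fully specified by two integers; in x.xx notation it can only ever be approximated, however many decimal places you spend.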

Whereas there's no deep conceptual difference between rationals and irrationals, there is between integers and rationals, in that some contexts only "make sense" for integers.

For example, it's not meaningful to speak of half a person, given that sawing a person in half destroys the property of personhood.

On division by zero, I agree this gives the (non-)value of infinity, and that leads one to consider the notion of infinity.

Cues from approaching values suggest "very large", but some contexts are ambiguous, especially where sign is concerned.

Rather than "very large", it may be "all possible values", e.g. where the negative side of the X axis tends to negative infinity on the Y axis, while the positive side tends to positive infinity, as both sides approach zero on the X axis.

Sometimes these wobblies can be dispelled by redefining the axes, e.g. from linear to log.

Our problem with infinity is that we tend to think of it as "very large", whereas it is a qualitatively different concept. Rudy Rucker groks this, when he describes the four "mind tools" as number/measurement, space/geometry, infinity/whole and information.

IKWYM about this opening a large and general can of worms, reminding me of something I'll blog next...