This could get long, and if it does, I'll break it up into separate posts, with this one covering what I mean by a "layer of abstraction". Later I'll go on to whether these are artificial contrivances of convenience or hard entities, the nature of their boundaries, the truth or otherwise of "as above, so below", and whether these differences can be traced back to a small number of meta-level inputs such as the "good or evil" duality. I'll pepper all this with examples, each of which could be a field of its own, but here I'm interested in whatever common essence can be extracted.
We don't experience stuff as a continuum, but as mind-sized chunks that I'll refer to as "layers of abstraction". Below and above each layer are things that may affect it, but which cannot be fully described or explained within that layer alone - which brings Gödel's Incompleteness Theorem to mind.
Example: The Evolving Infosphere
At the analog volts-and-seconds level, it's all about transistor design and the behavior of the electron shells of different atoms, which in turn drills down to the unique behaviors of small integers. But we don't usually consider that depth; it's all about what shapes and sizes of blocks of doped materials to join together to make effective transistors and other components.
At the more familiar digital level, we abstract out all of the analog stuff; nuances of time and voltage are simplified down to x volts = off and y volts = on, and time is pixelated into clock pulses. But this layer of abstraction is supported by the analog layer only as long as it successfully slews voltage between the "on" and "off" levels within the timing of a clock pulse - when this fails, the digital layer of abstraction breaks down in ways that make no sense within digital logic.
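A minimal sketch of that collapse from analog to digital, with made-up threshold voltages (not from any real logic family): everything between the "off" ceiling and the "on" floor is a region the digital abstraction simply cannot describe.

```python
# Illustrative thresholds only - an assumption, not a real logic family.
V_LOW_MAX = 0.8   # volts: at or below this, the digital layer reads "off"
V_HIGH_MIN = 2.0  # volts: at or above this, the digital layer reads "on"

def logic_level(volts: float) -> str:
    """Collapse a continuous voltage into a digital value."""
    if volts <= V_LOW_MAX:
        return "off"
    if volts >= V_HIGH_MIN:
        return "on"
    # The analog layer failed to slew in time; the digital
    # abstraction has no answer for this voltage.
    return "undefined"

print(logic_level(0.2))  # off
print(logic_level(3.1))  # on
print(logic_level(1.4))  # undefined
```

The "undefined" branch is the point: it isn't a third logic value, it's the abstraction's floor showing through.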
We have then aggregated transistors into chips, so we no longer have to think about individual transistors; chips onto circuit boards, so we no longer think about chips; boards into systems, systems into networks, and networks into The Internet. When we integrate ourselves and our code as actors within this Internet, we can consider the whole as the infosphere, with its own dynamics of function that may emerge quite differently from the raw inputs of original human intentions, etc. [discuss: 100 marks]
Building the Infosphere
Our individual minds can only handle a certain volume of complexity, and scaling up by pooling our minds only takes us so far. Pooling also adds extra wrinkles: imperfect communication between those minds, and differences within them that cause them to misunderstand each other, differ in objectives, and so on.
So, as we've grown the infosphere, we've done so by simplifying each abstraction layer to the point where we can take it for granted, then building up the next. The internal components of a computer system operate well enough at realistic clock speeds that we can ignore individual transistors: whether they're loose, within particular chips, or whether those chips are on the same board.
When networking works well enough, we can ignore which system a particular file is on - all systems can be blurred together and considered as "the network". Because the Internet and networks are built from the same TCP/IP materials, it's tempting to treat them as the same, ignoring a fundamental difference to our cost: entities on a network may trust each other, but those on the Internet should not!
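One crude way that difference surfaces in code - both sides speak identical TCP/IP, but the trust boundary is still visible in the address ranges. A sketch using Python's standard `ipaddress` module (the trust policy itself is my assumption, for illustration):

```python
from ipaddress import ip_address

def on_local_network(addr: str) -> bool:
    """Crude trust-boundary test: private (RFC 1918) and loopback
    addresses are "the network"; everything else is "the Internet"
    and gets no implicit trust."""
    ip = ip_address(addr)
    return ip.is_private or ip.is_loopback

print(on_local_network("192.168.1.10"))  # True - same-network peer
print(on_local_network("8.8.8.8"))       # False - Internet, untrusted
```

Real perimeter security is far more involved than an address check, of course; the point is only that identical packets warrant different default trust depending on which layer they arrive from.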
A set of technologies lets us melt the edges between systems and networks further: communications tolerably as fast and cheap as internal data flows and storage; effective virtualization of OSs as if they were applications; tolerably effective isolation via encryption; tolerably seamless load distribution and failover between systems and networks (where those two words already approach interchangeability). And so we have "the Cloud", which segues undramatically into AI and The Singularity. [discuss: 10 marks - it's really not that big a deal]
Other examples of layers of abstraction are visible light within the full electromagnetic spectrum, rhythm and pitch within the range of sound frequencies, and the micro/macro/astro-scopic scales. All of these are based on the limited focus of our senses, which we've artificially extended.
Another example: chemistry, with nuclear chemistry below and biochemistry above. This is a tricky one, because the floor of this abstraction layer seems hard and natural (and interesting - we'll likely come back to that later if we further consider the uniqueness of small integers) while the ceiling is more a matter of our mental limitations, plus the chaotic way that new complexity emerges (and more on that later, too!).
Consider written language; at its base is a combination of two symbolic layers: characters, and the text that can be constructed from them. It would be interesting to compare the information efficiency (simplistic metric: .zip archive size?) of a rich character set vs. longer words of simpler characters, which is similar to the RISC vs. CISC arguments of the 1980s. That processor debate appeared to be settled in favor of Intel's CISC, but is now re-emerging with the rise of ARM at a time when our needs and capabilities have changed.
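That simplistic metric is easy enough to try. A sketch using Python's standard `zlib` (DEFLATE, the same algorithm .zip uses); the sample texts are my own rough stand-ins for "rich character set" vs. "simpler characters, longer words", so treat any result as suggestive at best:

```python
import zlib

def compressed_size(text: str) -> int:
    """Crude information-efficiency metric: bytes after DEFLATE
    compression of the UTF-8 encoding."""
    return len(zlib.compress(text.encode("utf-8")))

# Dense symbols vs. longer words of simpler characters - both
# samples are illustrative assumptions, not a controlled corpus.
dense = "猫坐在垫子上。" * 50                # a rough Chinese rendering
verbose = "The cat sat on the mat. " * 50   # the English equivalent

print(compressed_size(dense), compressed_size(verbose))
```

A fair comparison would need matched meaning across large real corpora, but even this toy shows the shape of the experiment: fix the content, vary the symbol layer, measure the bytes.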
Number theory offers abundant examples, and is probably the best place to test predictions of closure (Gödel) and emergence: real numbers, rationals, integers and so on, and the nature of the "infinities" as expressed within these number systems.
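One concrete taste of those layered "infinities": the rationals feel like a strictly bigger layer than the integers, yet they have the same cardinality. The Calkin-Wilf sequence makes this tangible by enumerating every positive rational exactly once - a sketch:

```python
from fractions import Fraction

def calkin_wilf():
    """Yield every positive rational exactly once (Calkin-Wilf order),
    demonstrating that the rationals are countable like the integers."""
    q = Fraction(1, 1)
    while True:
        yield q
        # Newman's recurrence: next = 1 / (2*floor(q) - q + 1)
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

gen = calkin_wilf()
first = [next(gen) for _ in range(6)]
print(first)  # 1, 1/2, 2, 1/3, 3/2, 2/3
```

The reals, by contrast, cannot be enumerated this way at all (Cantor's diagonal argument) - a genuinely different kind of infinity sitting in the next layer up.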