Recently inspired by Danny Ayers's comments on XHTML and Xanadu, I need to talk about levels of abstraction.
Let's start with computers and work up the levels of abstraction. At the very lowest level you have electrical signals moving through circuits. In the good old days it was +5V for a logical 1 and 0V for a logical 0; I don't know what voltage the latest processors run at, but to keep heat down they have been running at lower and lower voltages. If you are ever working at a very low level like this you can attach a scope to different points on the circuit, watch the pulses go by, and see for yourself that the system simulates a digital system. Yes, simulates. All the transistors, resistors and capacitors are really analog components that eventually settle into one of two states. Now they settle pretty quickly, but the edges aren't those beautiful step functions you see in the product spec sheet. And determining a 0 or a 1 is actually a little fuzzy, with 0-0.2V being a 0 and 4.5-5V being a 1.

When bringing up a new board it is especially useful to attach a scope to different parts of the board and measure the voltages. I once spent many hours debugging a system that failed only intermittently, eventually discovering that one of the chips was just barely getting enough voltage; its outputs for a logical 1 sat right on the 4.5V boundary and sometimes fell below it, so a 1 became a 0 on a completely random basis.
Now let's move up a level of abstraction and look at machine code. These are the raw binary codes that a processor executes. As an undergraduate, I knew most of the 8051 machine code and could write programs for the processor by typing raw hex codes into a PROM programmer by hand, but only because I had two classes that used that processor and the school didn't have an assembler for 8051s at the time. Note that when working at this level of abstraction all the world is binary, and I don't get to see the voltage levels; it's either a 0 or a 1. In moving up a level of abstraction I have given up some information, the voltage levels, to gain something else: I am no longer working with the state of just one or two lines, but can now manipulate the whole state of the processor. I gave up something to move to a higher level of abstraction, but got something in return.
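To give a flavor of the level, here is a made-up two-instruction fragment in raw 8051 hex, with the meaning of each byte spelled out; at this level the hex is all you type in:

    74 2A       ; MOV  A, #2AH  - load the accumulator with 0x2A
    02 01 00    ; LJMP 0100H    - jump to absolute address 0x0100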
Assembly language adds another level of abstraction: I can put labels in my code and write gotos that jump to those labels. The assembler computes the distance to the jump destination, determines whether to use a short or a long jump, and then emits the correct machine code.
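Here is roughly what that looks like (a sketch; many 8051 assemblers accept a generic JMP and resolve it to the right instruction themselves):

    START:  MOV  A, #2AH        ; the label stands in for a raw address
            ; ... more code ...
            JMP  START          ; the assembler emits SJMP (2 bytes) if START is
                                ; within -128..+127 bytes, otherwise LJMP (3 bytes)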
Moving up to a programming language like C, which is compiled down into machine code, I get access to higher-level abstractions like for-loops and structs. Here again, as in all the other steps, I give something up: because I am no longer in charge of the emitted machine code, I don't know the exact state of the processor as it executes each line of C. In fact, most lines of C generate multiple machine-code instructions. On the upside, I can put together much larger programs, and put them together much more quickly than I could in assembly language.
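As a toy example, in the little function below the for line alone compiles down to an initialization, a compare, a conditional branch and an increment, and the struct access hides the address arithmetic; I never see any of it:

    struct point { int x; int y; };

    int sum_x(struct point pts[], int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)   /* one line of C, several machine instructions */
            total += pts[i].x;
        return total;
    }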
Now let's loop back to Danny's statement:
Text editor + HTML sits at one level of abstraction. We need to work at another level to use XHTML etc etc. No big deal. [Danny Ayers]
Now, I am not just picking on Danny here; I have heard the same thing from people working in RDF.
So if we are moving up a level of abstraction, from HTML to XHTML, I expect to lose something and subsequently get something else in return. And there's the rub: what do I get in return for moving from HTML to XHTML? I already know what I have to give up: I have to produce well-formed documents. That means no leaving those br tags open, making sure all your tags nest properly, and so on.
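To make that concrete, here is the same markup written both ways (a made-up fragment):

    HTML:   <p>one line<br>another line
    XHTML:  <p>one line<br />another line</p>

    HTML:   <b><i>sloppy nesting</b></i>
    XHTML:  <b><i>proper nesting</i></b>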
And that well-formedness constraint means tools. Sure, you can type XML in by hand, but if you want to guarantee that it is well-formed you will need to check it with a tool. Maybe pass the document through an XML processor, maybe through a tool like Tidy, but either way you need a tool.
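With classic command-line Tidy that check is a one-liner (assuming its -asxhtml and -o options, which convert the input to well-formed XHTML and name the output file):

    tidy -asxhtml -o index.xhtml index.html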
And if you want to generate not just well-formed XHTML but valid XHTML, then you will definitely need a tool like Tidy. So what do I get in return? Remember, at each level of abstraction we got something in return.
Don't point to DOM as the answer here; to quote the DOM Level 1 Specification:
The Document Object Model provides a standard set of objects for representing HTML and XML documents...
So there is no advantage there to using XHTML over HTML. No, if there is an advantage we have to look to later specifications like XPath, XPointer, XSLT and XQuery. These are powerful technologies, but here is the rub: they aren't directly applicable to people publishing web sites. That is, if I use a CMS to publish my web site, I could tweak the templates, add a validator, and have it produce valid XHTML. So what? If I just want to keep publishing my site I get no advantage.

Look back at the example of the C compiler. When I moved up a level of abstraction, I got to use the higher-level abstractions, for-loops and structs, and leave the lower level behind. But the current set of tools for publishing and reading web sites don't have any new or special abilities when consuming XHTML over HTML, so when I move up that level of abstraction I am not getting any benefit. Sure, as XML is around longer, more and more CMS tools may adopt XHTML as an internal format and use the power of XSLT, XPointer and XQuery to manipulate the content, but that just isn't happening today, so the benefits of going to XHTML just aren't there.
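For the record, here is the kind of thing those later specifications buy you once the content really is XHTML (a sketch; assume the prefix x is bound to the XHTML namespace, http://www.w3.org/1999/xhtml):

    //x:a/@href     every link in the document
    //x:img/@src    the source of every image

Handy, but only to a consumer that ships an XPath engine, and those aren't the tools most people are publishing and reading with today.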
This also gets to the core of why, in the current RSS debates, I prefer content:encoded over xhtml:body. In RSS there are two ways of offering up the marked-up content of your items. The first, and older, method is to put HTML, encoded, into the content:encoded element. Recently an alternate proposal has come along to use an xhtml:body element that contains the item content. The latter method has the advantage that consuming tools could process the XHTML and do things like pull out links or images. The problem here is the same as with moving from HTML to XHTML: the pain is endured by the producer but all the benefit goes to the consumer, and right now only a slim minority of consumers at that.

In this case I think it's better to stick with content:encoded and let the consuming tools bear the pain. That doesn't mean the consuming tools need to start embedding full HTML/SGML processors either; by incorporating Tidy, a consuming tool can transform the HTML into XHTML and then process it as such. This way the consumer, who gets the benefit of using XHTML, also bears the burden of generating it from HTML in the first place.
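For reference, the two forms look roughly like this (a sketch with the item boilerplate trimmed; assume the feed declares content as the RSS 1.0 content module namespace, http://purl.org/rss/1.0/modules/content/, and xhtml as http://www.w3.org/1999/xhtml):

    <item>
      <title>Example</title>
      <!-- method 1: markup shipped as escaped text -->
      <content:encoded><![CDATA[<p>Some <em>marked up</em> content.</p>]]></content:encoded>
    </item>

    <item>
      <title>Example</title>
      <!-- method 2: markup shipped as real, parseable XHTML -->
      <xhtml:body>
        <xhtml:p>Some <xhtml:em>marked up</xhtml:em> content.</xhtml:p>
      </xhtml:body>
    </item>

To an XML parser the first is just an opaque string, while the second is a tree it can walk, which is exactly where the pull-out-the-links benefit comes from.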
Posted by Joe on 2003-05-13