I don't know which way it will go, but I would suggest that if we are searching for what exactly the web *is*, we have to go further than saying it is HTML, as Hugh does in this piece.
> The final addition to our constraint set for REST comes from the code-on-demand style of Section 3.5.3 (Figure 5-8). REST allows client functionality to be extended by downloading and executing code in the form of applets or scripts. This simplifies clients by reducing the number of features required to be pre-implemented. Allowing features to be downloaded after deployment improves system extensibility. However, it also reduces visibility, and thus is only an optional constraint within REST.
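To make the constraint concrete, here is a minimal sketch of code-on-demand: the client fetches executable code from the server and runs it to extend its own behavior. The "downloaded" script body is simulated as a string here, and the names (`downloaded`, `sortWidget`) are purely illustrative, not from any real API.

```javascript
// Hypothetical code-on-demand sketch: imagine this string is the body
// of a script the server returned for GET /widgets/sort.js.
const downloaded = "return items.slice().sort((a, b) => a - b);";

// The client extends its functionality by compiling and executing the
// code it received, rather than shipping the feature pre-implemented.
const sortWidget = new Function("items", downloaded);

console.log(sortWidget([3, 1, 2])); // logs [ 1, 2, 3 ]
```

This is exactly what reduces visibility: an intermediary (or a crawler) looking at the messages sees only an opaque blob of code, not the resulting behavior.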
You're barking up the wrong tree here. GWT is for writing AJAX/RIA/whatever-you-want-to-call-them applications. It makes the obnoxious JS bits tolerable. JS applications are inherently unfriendly to search engines. You can overcome that (as Ian has done) but it isn't going to be as simple as tossing an HTML file on some random web server and having it indexed.
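The crawler-unfriendliness comes down to this: a crawler of that era indexes only the markup the server actually sends, never what a script builds afterward. Here is a hedged sketch of the contrast, with made-up names (`renderHtml`, `products`) standing in for whatever your stack provides.

```javascript
// Illustrative data an application might serve.
const products = [
  { name: "Widget", price: 9.99 },
  { name: "Gadget", price: 19.99 },
];

// Server-rendered markup: this is what a crawler can actually index.
function renderHtml(items) {
  return "<ul>" +
    items.map((p) => `<li>${p.name} - $${p.price}</li>`).join("") +
    "</ul>";
}

// What a JS/RIA client fetches instead: raw data it will turn into a
// page with script, which a crawler never executes.
const apiResponse = JSON.stringify(products);

console.log(renderHtml(products));
```

Serving both representations of the same data (as Ian has done) is the workaround, but it is real extra work compared with just tossing an HTML file on a server.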
What matters is picking the right point on that range for your application to sit, and that choice depends on how searchable you want to be.
NIST is assembling standards around SaaS, and one component of that work is weighing the value of the data against the risk of it being exposed. We need a similar decision framework for web applications, but in reverse: the value of the data versus the risk of it not being indexed.
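The proposed trade-off could be sketched as nothing more than a comparison of two scores. Everything here is invented for illustration: the function name, the 0-10 scale, and the scores themselves are not from NIST or any standard.

```javascript
// Purely illustrative: the application owner scores both factors 0-10,
// and the data is made crawlable only when the value of being indexed
// outweighs the risk of exposure.
function shouldBeCrawlable(exposureRisk, indexingValue) {
  return indexingValue > exposureRisk;
}

console.log(shouldBeCrawlable(2, 8)); // true  - e.g. a public catalog
console.log(shouldBeCrawlable(9, 3)); // false - e.g. private account data
```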