I recently implemented a GLSL renderer backend for Doom 3. Yes, a couple of backends already exist (e.g. raynorpat's), but unfortunately they did not run successfully on my hardware and produced serious rendering and pixel errors.
These images are from the first implementation of my backend, where I had accidentally called normalize() on a vector that was already almost normalized. The result was pixel imperfection compared to the standard ARB2 backend, plus the cost of a pointless normalization in the fragment shader.
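The mistake boiled down to something like the following sketch (a simplified, illustrative fragment shader; the variable names are mine, not the ones from the actual backend):

```glsl
// Simplified interaction fragment shader (illustrative, not the real code).
varying vec3 var_LightVector; // already normalized in the vertex program

void main() {
    // BUG: the interpolated vector is already almost unit length, so this
    // normalize() burns fragment cycles and nudges every lighting term by a
    // tiny amount -- enough to break a pixel-exact match with ARB2.
    vec3 L = normalize(var_LightVector);
    float NdotL = max(dot(vec3(0.0, 0.0, 1.0), L), 0.0);
    gl_FragColor = vec4(NdotL, NdotL, NdotL, 1.0);
}
```

Dropping the redundant normalize() and using the interpolated vector as-is made the output match ARB2 exactly.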
You can also see the importance of running a comparison or image-diff program when implementing a new backend. Can you see the differences between the first two images immediately, with the naked eye? I couldn't.
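A per-pixel diff is only a few lines of code. Here is a minimal sketch, assuming both screenshots have been dumped as raw RGB byte buffers of identical dimensions (in practice a tool like ImageMagick's compare does the same job, with nicer output):

```python
def diff_pixels(a: bytes, b: bytes, channels: int = 3, tolerance: int = 0) -> int:
    """Count pixels whose channel values differ by more than `tolerance`."""
    assert len(a) == len(b), "images must have identical dimensions"
    differing = 0
    for i in range(0, len(a), channels):
        if any(abs(a[i + c] - b[i + c]) > tolerance for c in range(channels)):
            differing += 1
    return differing

# Two 2x2 RGB "screenshots" that differ in one pixel by a single unit:
img_a = bytes([10, 20, 30,  40, 50, 60,  70, 80, 90,  100, 110, 120])
img_b = bytes([10, 20, 30,  40, 50, 61,  70, 80, 90,  100, 110, 120])
print(diff_pixels(img_a, img_b))               # 1 differing pixel
print(diff_pixels(img_a, img_b, tolerance=1))  # 0 -- within tolerance
```

A nonzero count with tolerance 0 is exactly the kind of off-by-a-hair regression the naked eye misses.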
Finally, here is the backend running the hellhole level. The black regions are areas that would be rendered by the (currently unimplemented in GLSL) heatHaze shader. Not bad for an i965 GPU.
Just for laughs, here is what happens when Doom 3 decides to try LSD; or rather, when the vertex program fails to pass initialized texture coordinates to the fragment program in the ARB2 backend.