Like most ideas, someone already had the gist of this one a few years ago [4].
But it would be an awesome fit for Project Glass.
We could render realistic 3D models in Augmented Reality using some old concepts from movies [1] and newer ones from video games [2] [3]. Since the video capture system is always going, you could perform realtime convolution of the reflected environment of the scene in front of your eyes and use that information to light the overlaid 3D geometry, making the augmented reality image look much more realistic, with accurate reflections for all sorts of materials, from shiny metals to rough plastics.
The steps are short, brutal, but conquerable.
1) Place a chrome ball in the scene to capture a partial environmental reflection map (a quick unwrap sketch follows this list).
2) Generate multiple prefiltered environment maps for various levels of surface roughness.
3) Light the scene with some physically based BRDFs, using the generated maps for great glory.
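As a minimal sketch of the unwrap in step 1, assuming an orthographic view of the ball and the standard light-probe parameterization (Python/NumPy, names are mine):

```python
import numpy as np

def mirror_ball_directions(size):
    """For each pixel of a square chrome-ball photo, return the view-space
    reflection direction it samples, plus a mask of valid (on-ball) pixels.
    Assumes an orthographic view with the camera looking down -Z."""
    # Pixel centers mapped to [-1, 1] across the ball image.
    t = (np.arange(size) + 0.5) / size * 2.0 - 1.0
    u, v = np.meshgrid(t, -t)                  # flip v so +Y points up
    r2 = u * u + v * v
    mask = r2 <= 1.0                           # pixels that actually hit the ball
    nz = np.sqrt(np.clip(1.0 - r2, 0.0, None))
    n = np.stack([u, v, nz], axis=-1)          # sphere normal at each pixel
    # Reflect the view ray V = (0, 0, 1) about the normal: R = 2(N.V)N - V.
    r = 2.0 * nz[..., None] * n
    r[..., 2] -= 1.0
    return r, mask
```

The valid directions cover almost the whole sphere; the only blind spot is directly behind the ball, which is where the "partial" in step 1 comes from.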
Some hand waving involved
For one, you aren’t getting laser-leveled perfect shots around the ball, just one arbitrarily directed shot. But the resulting data should provide enough information to satisfy any glossy or rough specular reflections. In fact, if all the math is done in View Space, the arbitrary orientation of the camera may not matter.
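To make the View Space point concrete, here is a small companion sketch (same assumptions and invented names as above): the shading reflection vector and the ball lookup are both expressed in view space, so the camera’s world orientation never appears in the math.

```python
import numpy as np

def view_space_reflection(normal, view_dir=(0.0, 0.0, 1.0)):
    """Shading reflection vector computed entirely in view space."""
    n = np.asarray(normal, dtype=float); n = n / np.linalg.norm(n)
    v = np.asarray(view_dir, dtype=float); v = v / np.linalg.norm(v)
    return 2.0 * np.dot(n, v) * n - v

def reflection_to_ball_uv(d):
    """Invert the chrome-ball mapping: take a view-space direction and return
    the [-1, 1]^2 coordinate of the ball-image pixel that saw it."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    # z of the sphere normal that reflects V = (0, 0, 1) into d.
    nz = np.sqrt(max((d[2] + 1.0) * 0.5, 1e-8))
    return d[0] / (2.0 * nz), d[1] / (2.0 * nz)
```

For example, `reflection_to_ball_uv(view_space_reflection([0.0, 0.0, 1.0]))` lands at (0, 0), the center of the ball image, i.e. the pixel that reflects straight back at the camera.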
Generating prefiltered, mipmapped radiance environment maps (PMREMs) in realtime is brutally expensive on today’s hardware, so for Project Glass the image-based lighting (IBL) may have to be static (generated once when an “augmented experience” begins). Then, so long as the camera maintains a steady view vector, you can render all the geometry really nicely.
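For a sense of where the cost goes, a brute-force prefilter over the captured samples might look like the sketch below (the roughness-to-exponent mapping is just one common choice, nothing Glass-specific). Run once per roughness level when the experience starts, this is tolerable; run every frame on mobile hardware, it isn’t.

```python
import numpy as np

def prefilter(dirs, radiance, weights, out_dirs, roughness):
    """Brute-force prefilter: convolve the captured radiance with a Phong-style
    specular lobe whose sharpness is driven by roughness. The O(out x in) cost
    is the expensive part.
    dirs:     (N, 3) unit directions of the captured samples
    radiance: (N, 3) RGB radiance per sample
    weights:  (N,)   solid-angle weight per sample
    out_dirs: (M, 3) unit directions of the output map texels"""
    power = max(2.0 / (roughness * roughness + 1e-4) - 2.0, 1.0)
    cos_t = np.clip(out_dirs @ dirs.T, 0.0, 1.0)   # (M, N) lobe cosines
    lobe = (cos_t ** power) * weights              # weight every input sample
    norm = lobe.sum(axis=1, keepdims=True) + 1e-8  # keep energy normalized
    return (lobe @ radiance) / norm                # (M, 3) filtered radiance
```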
Tethering: tomorrow’s rendering, today.
The next step could be a tethered experience.
Unlike SimCity, whose PR claimed to offload complex computation from people's PCs to remote servers (an idea so ridiculous only a news organization could parrot it with any sincerity), this *could* actually benefit from offloading.
If the glasses are able to stream video to your local PC, then you could perform the giant kernel filtering there and send back another video stream of low-res, realtime, dynamically preconvolved IBL maps. Each frame of the return video could be used to resolve ambient lighting. Add in a little ambient occlusion (AO), a dash of specular occlusion (SO), and maybe bend the normals a bit, and the resulting rendering should look very much a part of the scene.
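Purely as an illustration of that resolve step, assuming the PC sends back a small set of prefiltered maps keyed by roughness (the lookup interface and every name here are invented for the example):

```python
def resolve_ambient(prefiltered, albedo, ao, so, bent_normal, refl_dir, roughness):
    """Per-pixel ambient resolve using the latest IBL frame from the PC.
    `prefiltered` maps a roughness level to a lookup function (direction -> RGB),
    e.g. built from the maps sketched earlier. Illustrative only."""
    # Diffuse-ish term: the roughest map, sampled along the bent normal, dimmed by AO.
    diffuse = albedo * prefiltered[max(prefiltered)](bent_normal) * ao
    # Specular term: the map closest in roughness to the surface, sampled along
    # the reflection vector, attenuated by specular occlusion.
    level = min(prefiltered, key=lambda r: abs(r - roughness))
    specular = prefiltered[level](refl_dir) * so
    return diffuse + specular
```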
If the tethering is done over LAN, the latency could be as low as one or two frames (@30fps?), which means the IBL would be very consistent. Just need to keep that chrome ball in view.
[1] Terminator and Iron Man Image Based Lighting (ILM, SIGGRAPH 2010 PDF)
[2] Real-time Physically Based Rendering Implementation (tri-Ace R&D, CEDEC 2011 PPT)
[3] Offline Prefiltered Radiance & Irradiance Environment Map Tool (Lagarde, DONTNOD)
[4] Interactive Image Based Lighting in Augmented Reality (CESCG 2006)