The rendering is not the trick here, people. All rendering works the same: you turn each pixel a certain color and the scene appears.
Not having to recalculate the scene on the fly would save a huge amount of GPU/CPU power.
Maybe every atom has an address in 3D space, and the algorithm simply has to calculate which ones are visible from your current location.
How you determine where all those points are in 3D space (which you would need to know in order to render the next frame) would be the magic part.
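Just to make the idea concrete (this is my own toy sketch of "an address per atom, keep whichever is visible", not how the actual engine works): project every point into the current view and let the nearest point win each pixel, like a depth buffer over points instead of triangles.

```python
import math

def render_points(points, width, height, focal):
    """points: list of (x, y, z, color); camera at the origin looking down +z.
    Returns a 2D grid where each cell holds the color of the nearest atom."""
    depth = [[math.inf] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]
    for x, y, z, color in points:
        if z <= 0:          # atom is behind the camera, skip it
            continue
        # simple perspective projection onto the image plane
        px = int(width / 2 + focal * x / z)
        py = int(height / 2 + focal * y / z)
        if 0 <= px < width and 0 <= py < height and z < depth[py][px]:
            depth[py][px] = z      # nearest atom claims the pixel
            frame[py][px] = color
    return frame

# two atoms lined up behind each other: the nearer one should win the pixel
pts = [(0.0, 0.0, 5.0, "red"), (0.0, 0.0, 10.0, "blue")]
img = render_points(pts, width=8, height=8, focal=4.0)
print(img[4][4])
```

Obviously brute-forcing every atom like this is exactly what you couldn't afford at "unlimited detail" scale, which is why the real breakthrough would have to be in skipping the invisible ones cheaply.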
It seems to me the algorithm and the data compaction would be the breakthroughs with this engine.
Assuming it's not a hoax of course.