I'm a VFX guy and used to work at ILM (Lucasfilm). This title is incredibly misleading. Tech to "previsualize" objects "in camera" has been around a long time and is in pretty wide use. Avatar used it very heavily, for example.
It's important to realize that what happens in post is a whole lot more than making a 3D object and placing it in the scene. We do a lot of integration work in 2D by hand to match things like color, edges, etc. There's a massive amount of simulation work on water, fire, etc. Sometimes people straight-up paint on film frames in things like Photoshop.
The big win with real-time visualization is the creative control it gives the director and director of photography, who are generally far removed from the final product, which can cause expensive second-guessing all around.
Lastly, it's a bit condescending to artists working on games to suggest that the painstaking work that goes into making/optimizing/QCing interactive content is something that can just happen on the fly. Sure, game engines and hardware are pretty great, but games themselves are more and more realistic because very specialized people are working very hard to make them that way.
I'm one of the engineers working on this, I even make a semi-appearance in the video. I'm not really going to say much about it because it's a little annoying that it leaked out, and by such a crappy recording too, but -
a) I think the title here is pretty good, better than the hyperbolic one the Inquirer used.
b) Your middle two paragraphs are spot on.
c) Given the video, it's understandable people are focused on the "actor as a virtual character" previsualization part ('but Avatar did this two years ago!', 'our game engine does that!', yadda yadda). That's only a small part of it, and honestly one of the less interesting ones.
I don't see what you think is condescending, though. There's not a lot of difference today between the skill set of a CG artist and that of a games artist. Generally they're just working to different budgets. (I say that as someone who spent 15 years working on console games.)
The painstaking days of game artists building models that use fewer than 100 verts and hand-painting 256x256 textures are gone. Now, both your CG artist and your game artist build super-high-resolution models, probably starting in something like ZBrush, then decimate down to whatever they need, with the high-res asset used to generate normal or displacement maps.
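The bake step described above can be sketched in a toy form. This is not anyone's production pipeline; it's a minimal illustration in which a procedural heightfield stands in for the sculpted high-res surface, and its surface normals get sampled down into a fixed-resolution tangent-space normal map (all function names and numbers here are made up for the example):

```python
import numpy as np

def highres_height(x, y):
    # Stand-in for a sculpted surface: fine detail the low-poly mesh can't hold.
    return 0.05 * np.sin(40 * x) * np.cos(40 * y)

def bake_normal_map(res):
    """Sample the high-res surface's normals into a res x res texture."""
    u = (np.arange(res) + 0.5) / res
    x, y = np.meshgrid(u, u)
    eps = 1e-4
    # Finite-difference gradients of the heightfield give the surface slope.
    dx = (highres_height(x + eps, y) - highres_height(x - eps, y)) / (2 * eps)
    dy = (highres_height(x, y + eps) - highres_height(x, y - eps)) / (2 * eps)
    n = np.stack([-dx, -dy, np.ones_like(dx)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Pack [-1, 1] components into [0, 255], as a typical normal map does.
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)

normal_map = bake_normal_map(256)
print(normal_map.shape)  # (256, 256, 3)
```

At render time the low-poly model looks up this texture per pixel, so the lighting picks up detail the geometry no longer carries.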
Optimus Prime in the original Transformers movie used ~20x more polygons than a PS4/Xbone game would today for a similar character. That's pretty amazing when you consider that the GPUs in those machines are already handily outmatched by PCs, and that Nvidia/AMD bring further performance gains with every hardware cycle.
Hey, thanks for the reply! It's definitely really impressive tech, and I (like everyone else) am hugely excited about the possibilities.
My comment was definitely more about the media coverage than anything else. I've been a lighter/comper/pipeline dev mostly in features and have never worked in games, so I can't speak with any real authority on the subject. I really just wanted to convey that mainstream coverage of VFX (and games) tech seems to gloss over the human element.
Games and film VFX are two different industries with similar but distinct goals:
- how high-quality an image can I produce in ~1/30 of a second
- how high-quality an image can I produce in several hours
The interesting thing is how different the solutions often end up being under these constraints, along with the continual migration of techniques from film to games.
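Just to put a rough number on how far apart those two budgets sit, here's a back-of-envelope calculation. The 4-hour figure is an assumption I'm plugging in for illustration; actual farm render times per frame vary wildly by show and shot:

```python
# Back-of-envelope: how much more compute time an offline renderer gets
# per frame than a game. The film figure is an assumed example, not a quote.
game_budget_s = 1 / 30      # one frame at 30 fps
film_budget_s = 4 * 3600    # hypothetical 4-hour render for one film frame

ratio = film_budget_s / game_budget_s
print(f"{ratio:,.0f}x more compute time per frame")  # 432,000x
```

Five-plus orders of magnitude per frame goes a long way toward explaining why the two industries' solutions diverge so much.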
Including an accurate lighting model? Or do you think that there's a significant inflection point before that when we're still attempting to replicate a final render using rasterizers?
(My current bee-in-bonnet, besides integrating HMDs and mocap, is realtime-ish path tracing - but that's a while away yet, still.)
My production company already uses realtime motion capture and realtime rendering - we've been doing so since we moved away from pure Machinima techniques 5 years ago.
And we're currently incorporating GPU-based semi-realtime path tracing and Oculus Rift powered VR into the mix...
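For readers wondering what "path tracing" actually means in the realtime discussion above: it's Monte Carlo light transport, tracing rays from the camera and letting them bounce randomly until they find light. Below is a deliberately tiny toy version, nowhere near any production or GPU implementation: one diffuse sphere under a uniform "sky" emitter, with every constant made up for the example:

```python
import math, random

# Toy scene (all numbers made up): one diffuse sphere lit by a uniform sky.
SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0
SKY, ALBEDO = 1.0, 0.6

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(a):
    l = math.sqrt(dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

def hit_sphere(o, d):
    # Ray-sphere intersection; returns hit distance or None.
    oc = sub(o, SPHERE_C)
    b, c = dot(oc, d), dot(oc, oc) - SPHERE_R * SPHERE_R
    disc = b*b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def cosine_dir(n, rng):
    # Cosine-weighted bounce direction: normal plus a random unit vector.
    while True:
        p = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        if 0 < dot(p, p) <= 1:
            u = norm(p)
            return norm((n[0] + u[0], n[1] + u[1], n[2] + u[2]))

def radiance(o, d, rng, depth=0):
    t = hit_sphere(o, d)
    if t is None:
        return SKY          # ray escaped to the sky
    if depth >= 4:
        return 0.0          # cut long paths short
    p = (o[0]+t*d[0], o[1]+t*d[1], o[2]+t*d[2])
    n = norm(sub(p, SPHERE_C))
    # With cosine-weighted sampling the diffuse estimator reduces to albedo * L.
    return ALBEDO * radiance(p, cosine_dir(n, rng), rng, depth + 1)

rng = random.Random(7)
samples = [radiance((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), rng) for _ in range(16)]
print(round(sum(samples) / len(samples), 6))  # 0.6: sphere reflects 60% of the sky
```

The expensive part in practice is doing this for millions of pixels with many samples each, which is exactly why "realtime-ish" is the honest qualifier.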
While this is very ambitious, I'm sure that they will eventually settle on a similar pipeline to Avatar: render in lower graphics quality in real-time, then increase the "graphics settings" and re-render in post-production. People are used to seeing things like realistic water and hair simulations, and those things do just take time.
Check out "Gormiti Nature Unleashed". It's a CGI-rendered children's cartoon, and the rendering is so low-budget that when the scene complexity gets high, it stutters, as if they're recording from real-time rendering. I don't know if that's what they're doing (though the entire look of it makes that plausible), or if they've just got a fixed time budget for rendering each frame and simply skip frames for the most complex stuff.
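The two possibilities above produce visibly different artifacts, which a toy model makes concrete. All the per-frame costs here are invented for illustration: recording from real-time rendering makes heavy frames overrun (stutter), while a fixed per-frame budget drops frames that can't be finished in time:

```python
import itertools

# Toy comparison of the two behaviors, with made-up per-frame render costs.
costs = [20, 20, 90, 120, 20, 20]   # ms needed to render each frame
BUDGET = 33                          # ms available per frame at ~30 fps

# (a) Recording straight from real-time rendering: every frame is shown,
# but heavy frames overrun, so playback hitches.
finish_times = list(itertools.accumulate(costs))

# (b) Fixed per-frame budget: frames that blow the budget are dropped and
# the previous frame is held (a simplification; real schedulers are cleverer).
shown = [cost <= BUDGET for cost in costs]

print(finish_times)  # [20, 40, 130, 250, 270, 290]
print(shown)         # [True, True, False, False, True, True]
```

In (a) the third and fourth frames arrive late and everything after them slips; in (b) the timing stays locked but those frames are simply missing, which matches the "skips frames for the most complex stuff" guess.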
At what point will the actors use HMDs so they too can see the virtual world they're supposedly interacting with? And will lag and resolution affect their behavior in a way that causes an "uncanny valley", with actors responding to situations a discernible few milliseconds later than the audience expects them to?
Can't speak for Lucasarts, but in the case of my company's mocap pipeline, in about three weeks' time once I get around to commissioning the integration code :)
In Neal Stephenson's 'The Diamond Age', much of the acting was done by software, leaving the human actors to 'fill in the blanks' and add a human touch. You could avoid the uncanny valley caused by lag by starting the response from software instantaneously and interpolating into what is being acted.
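That interpolation idea can be sketched very simply. Everything below is hypothetical, the latency, blend window, and the scalar "poses" are stand-ins: a scripted response fires at t=0, and once the actor's (laggy) capture arrives we crossfade from the scripted pose into the captured one:

```python
# Hypothetical latency mask: a scripted response starts at t=0; the actor's
# mocap arrives LAG seconds late, and we crossfade from script to actor.
LAG = 0.120    # assumed mocap latency in seconds
BLEND = 0.200  # crossfade duration once mocap arrives

def scripted_pose(t): return 1.0   # stand-in for a canned animation pose
def mocap_pose(t):    return 2.0   # stand-in for the actor's captured pose

def lerp(a, b, w): return a + (b - a) * w

def output_pose(t):
    if t < LAG:
        return scripted_pose(t)                      # software covers the gap
    w = min(1.0, (t - LAG) / BLEND)                  # ramps 0 -> 1 over BLEND
    return lerp(scripted_pose(t), mocap_pose(t), w)  # fade into the real actor

print(output_pose(0.0))              # 1.0 (pure script, before mocap arrives)
print(round(output_pose(0.22), 6))   # 1.5 (halfway through the crossfade)
print(output_pose(1.0))              # 2.0 (pure actor)
```

Real character rigs would blend full skeletal poses rather than a scalar, but the scheme is the same: the audience never sees the dead time, only a response that starts on cue and becomes the actor's.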