Position Pass From Z Depth
Posted 15 September 2010 - 06:26 PM
but one render guy said that (theoretically) we can get this at the compositing stage from the Z pass and the 3D camera information (position, rotation and focal length).
and so, after several days of math and pencil (on the subway to and from the workshop), I came up with a position pass built from the Z pass and camera info.
here is a comp with workflow and math.
actually I made this for myself, and it's not a total solution for all cases. what I know so far: it doesn't work right with cameras that have an "unusual" film aspect (like height > width).
I did tests with renders from Maya MentalRay, the 3ds Max scanline renderer and the Fusion 3D renderer.
and I did "difference" tests between the true P pass and the P-from-Z pass: the error was on the order of 0.04 for a 2048*872 32-bit float flow (0.08 for 16-bit float). I was surprised.
With this method you can generate a true world-space position pass for Fusion's 3D space.
and more: I think you can use it in some tricky ways, e.g. with data from matchmoving apps. if you have a well-tracked 3D camera for your live-action footage and (by some magical means) a Z depth for that footage, you can automatically generate a 3D representation of the data. (but I haven't tried this so far.)
To work with it, you need a veeeeeeeeeeeeery helpful fuse (I use it a lot on our current project).
thanks Anatomical Travelogue and Chad for that ("alien-brained", as Steve commented there) stuff!
check this out:
comp and example footages (19 mb)
Video tutorial on Vimeo.
ps: sorry for the soundtrack, if you don't like it
but.. have a !nuff
-------------------- UPD: added the PfromZ macro
a macro that produces a world-space position pass from:
- an image with a Z channel
- information about the camera position, rotation and angle of view
PfromZ.setting 11.39KB 98 downloads
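The core idea of reconstructing world position from depth can be sketched like this (a minimal sketch assuming a simple pinhole camera; the function name, the camera-looks-down-negative-Z convention, and the Z = distance-along-view-axis assumption are mine, not taken from the macro itself):

```python
import numpy as np

def position_from_z(z, width, height, fov_deg, cam_rot, cam_pos):
    """Reconstruct world-space positions from a Z-depth image.

    z        -- (H, W) array, camera-space depth (distance along the view axis)
    fov_deg  -- horizontal angle of view in degrees
    cam_rot  -- (3, 3) camera-to-world rotation matrix
    cam_pos  -- (3,) camera position in world space
    """
    aspect = height / width
    # screen coordinates in [-1, 1], sampled at pixel centers
    xs = (np.arange(width) + 0.5) / width * 2.0 - 1.0
    ys = 1.0 - (np.arange(height) + 0.5) / height * 2.0
    u, v = np.meshgrid(xs, ys)
    tan_half = np.tan(np.radians(fov_deg) / 2.0)
    # camera-space ray direction per pixel (camera looks down -Z,
    # ray Z component is -1 so scaling by depth puts the point at -z)
    cam = np.stack([u * tan_half, v * tan_half * aspect,
                    -np.ones_like(u)], axis=-1)
    cam = cam * z[..., None]
    # rotate into world space and offset by the camera position
    return cam @ cam_rot.T + cam_pos
```

For a camera at the origin with identity rotation and a 90-degree AoV, the center pixel of a constant depth-1 image lands at (0, 0, -1), which is the expected point straight ahead of the camera.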
Posted 16 September 2010 - 01:48 AM
I actually wanted to play around with position passes as well, and therefore wanted to write a shader for Fusion. right now I can output eye and local space, but I'm not sure why world space doesn't work. I'll probably have to bother eyeon.
until then I can use your way. your math teacher would be proud!
Posted 16 September 2010 - 09:42 AM
Posted 16 September 2010 - 11:10 AM
wow! you're right! ) I didn't think of that clever use of the CT in this situation! thanx!
you can skip that entirely by generating the gradient in the CT using the x and y variables
I'm afraid I don't understand image processing that deeply.
Did you try making the gradient not from 0 to 1, but from 0 to (width-1)/width (and 0 to (height-1)/height)? That way you don't sample outside the image
Why is (size-1)/size better? Interesting.
Posted 16 September 2010 - 11:31 AM
Posted 16 September 2010 - 04:34 PM
thanx for the info. good to know these subtle things.
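If I read the suggestion right, the difference between the two gradients looks like this (a plain sketch of the numbers, not actual Custom Tool code):

```python
width = 4

# gradient from 0 to 1: the last sample lands exactly on 1.0,
# i.e. on the far edge, outside the last pixel
naive = [i / (width - 1) for i in range(width)]   # [0.0, 0.333..., 0.666..., 1.0]

# gradient from 0 to (width-1)/width: every sample stays strictly
# inside the image, one value per pixel column
safe = [i / width for i in range(width)]          # [0.0, 0.25, 0.5, 0.75]
```

The "safe" version simply maps pixel index i to i/width, so the maximum coordinate is (width-1)/width < 1 and no lookup ever falls outside the image.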
Posted 17 September 2010 - 08:02 AM
would you two guys be able to help with some math?
I want to create a disparity map from the P pass. we have a P pass for each eye plus the stereo camera... so I thought we could calculate the "offset" for each pixel from the left to the right eye?!
But I don't really have a clue how to start... in the end it would only be the red channel that displaces content for the right eye...
I hope that was understandable :-)
and maybe it is already on your list, chad???
Posted 17 September 2010 - 08:32 AM
Wouldn't the disparity map in pseudo code be something like:
for each pixel_in_A:
    for each pixel_in_B:
        if pixel_in_A == pixel_in_B:
            return vector( pixel_in_A_pos, pixel_in_B_pos )
Only one piece missing and we have a poor man's Ocula.
1. Recreate the depth from optical flow
2. Use robocop's depth-to-world-position
3. Generate the disparity map
Ok, who will do the depth from moving pictures? Isn't that possible with Furnace?
Posted 17 September 2010 - 08:46 AM
as we are on a full-CG production we don't have that problem, it is all there! I think maybe writing a shader in 3D is even easier... but in general it would be nice to calculate a "poor man's" Ocula inside Fusion.
I think any optical flow program is able to spit out depth; I used it once with PFTrack!
it took ages and was, well, usable to some degree..
but maybe it would work
Posted 17 September 2010 - 08:47 AM
using OFlow to calculate pixel movement -> bigger movement = closer to camera. a poor man's approach, which could be refined further with additional masking/keying etc... (no solution for trees and windy weather though :))
Posted 17 September 2010 - 08:58 AM
and maybe it is already on your list, chad???
Yeah, but in the "done" column.
Blazej is almost right with his idea, except that, because of quantization, you cannot find pixels where A == B; rather you have to find where abs(A - B) is lowest. Which means you can't just test as you go, you have to keep a buffer of the best match so far.
As for the missing parts of Ocula, you CAN use Twixtor to find the disparity between the two eyes and generate a very good depth map (assuming your image is optical-flow-able). What I don't know how to do (and don't even know whether Ocula knows how to do) is minimize the error over time.
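Putting the correction together with the pseudocode above, a brute-force version might look like this (a sketch under my own assumptions: a rectified stereo pair, so the match is searched only along the same scanline; all names are mine):

```python
import numpy as np

def disparity_from_ppasses(p_left, p_right):
    """Naive per-scanline disparity from two world-position passes.

    p_left, p_right -- (H, W, 3) world-space position images, one per eye.
    Returns an (H, W) array of horizontal pixel offsets (right - left).
    """
    h, w, _ = p_left.shape
    disp = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # exact equality won't happen because of quantization, so
            # take the smallest abs(A - B) along the scanline instead
            d = np.linalg.norm(p_right[y] - p_left[y, x], axis=-1)
            disp[y, x] = np.argmin(d) - x
    return disp
```

This is O(W) per pixel, so it is only a proof of concept; a real tool would limit the search range and handle occlusions, which is exactly the "buffer" part of the argument above.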