
Position Pass From Z Depth



#1 robocop


Posted 15 September 2010 - 06:26 PM

so... on our production we ran into an unexpected problem: the renderer couldn't give us a position pass :(
but one of the rendering guys said that (theoretically) we could get one at the compositing stage from the Z pass and the 3D camera information (position, rotation and focal).
and so, after several days of math and pencil (on the subway, on the way to and from the workshop), I came up with a position pass built from the Z pass and camera info.

here is a comp with the workflow and the math.
actually I made this for myself, and it is not a total solution for all cases. what I know so far: it doesn't work right with cameras that have an "unusual" film aspect (like height > width).

I tested it with renders from Maya MentalRay, the 3ds Max scanline renderer, and the Fusion 3D renderer.

and I did "difference" tests between a true P pass and the P-from-Z pass: the error was on the order of 0.04 for a 2048x872 32-bit float flow (0.08 for 16-bit float). I was surprised.

With this method you can generate a true world-space position pass for Fusion 3D space.

and more.. I think you can use this in some tricky ways, like with data from matchmoving apps: if you have a well-tracked 3D camera for your live-action footage and (by some magical means) a Z depth for that footage, you can automatically generate a 3D representation of that data. (but I haven't tried this yet..)
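(To make the idea concrete, here is a minimal NumPy sketch of that reconstruction. It is only an illustration, not the actual comp or macro from this post: it assumes a pinhole camera, a Z pass that stores distance along the camera view axis, square pixels, and a horizontal angle of view; all the names are made up.)

import numpy as np

def position_from_z(z, cam_pos, cam_to_world, fov_deg):
    # z            : (H, W) depth along the camera view axis (assumed convention)
    # cam_pos      : (3,) camera position in world space
    # cam_to_world : (3, 3) camera-to-world rotation matrix
    # fov_deg      : horizontal angle of view in degrees
    h, w = z.shape
    tan_half = np.tan(np.radians(fov_deg) / 2.0)
    # sample pixel centres so the ramp never reaches the image edge
    # (axis signs and row order depend on the renderer's conventions)
    u = (np.arange(w) + 0.5) / w * 2.0 - 1.0            # -1..1 across the width
    v = ((np.arange(h) + 0.5) / h * 2.0 - 1.0) * h / w  # scaled by film aspect
    uu, vv = np.meshgrid(u, v)
    # camera-space ray through each pixel; camera looks down -Z
    rays = np.stack([uu * tan_half, vv * tan_half, -np.ones_like(uu)], axis=-1)
    # scale rays by depth, rotate into world space, add the camera position
    return (rays * z[..., None]) @ cam_to_world.T + cam_pos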


To work with it you need a veeeeeeeeeeeeery helpful Fuse (I use it a lot on our current project).
thanks to Anatomical Travelogue and Chad for that ("alien-brained", as Steve commented there) stuff!

check this out:
comp and example footage (19 MB)
video tutorial on Vimeo.

ps: sorry about the soundtrack, if you don't like it :)
but.. have a !nuff


-------------------- UPD: added the PfromZ macro

a macro that produces a world-space position pass from:
- an image with a Z channel
- information about the camera position, rotation, and angle of view

Attached File  PfromZ.setting   11.39KB   98 downloads

-robbo.

#2 bfloch


Posted 16 September 2010 - 01:48 AM

awesome way to start the day. great work.
I actually wanted to play around with position passes as well, and therefore wanted to write a shader for Fusion. Right now I can output eye and local space, but I'm not sure why world space does not work. I'll probably have to bother eyeon.

Until then I can use your way. Your math teacher would be proud!

#3 robocop


Posted 16 September 2010 - 03:39 AM

thanks Blazej )

#4 ChadCapeland


Posted 16 September 2010 - 09:42 AM

Neat. You beat me to it; this was on my to-do list, but now I can just cross it off. :) And the music is fine. :)

A couple of things... Did you try making the gradient not from 0 to 1, but from 0 to width/width-1 (and 0 to height/height-1)? That way you don't sample outside the image (which 1,1 is). It may improve the precision. BUT... you can skip that entirely by generating the gradient in the CT using the x and y variables. It might make the setup slower, but I'm not sure, since it would save you from needing the Bol.

- Chad

#5 xmare


Posted 16 September 2010 - 10:08 AM

geeks!! :D

thanks a lot for the setup:)

cheers

#6 robocop


Posted 16 September 2010 - 11:10 AM

you can skip that entirely by generating the gradient in the CT using the x and y variables

wow! you're right! :) I didn't think about that clever use of the CT in this situation :) thanks!

Did you try making the gradient not from 0 to 1, but from 0 to width/width-1 (and 0 to height/height-1). That way you don't sample outside the image

I'm afraid I don't understand image processing that deeply.
Why is size/size-1 better? Interesting.

#7 ChadCapeland


Posted 16 September 2010 - 11:21 AM

Oops.

I meant Width/(Width+1).
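(A toy numeric illustration of the sampling issue, in plain NumPy rather than a Fusion setup, and only under my reading of it: a 0-to-1 ramp rasterized across N samples puts the last sample at exactly 1.0, on the outer edge of the image, while rescaling by Width/(Width+1) keeps every sample strictly inside.)

import numpy as np

width = 4
# a 0..1 ramp sampled at `width` points puts the last sample at exactly 1.0,
# i.e. on the outer edge of the image rather than inside the last pixel
ramp = np.linspace(0.0, 1.0, width)
print(ramp)                     # [0.     0.3333 0.6667 1.    ]

# rescaling by width / (width + 1), as suggested above, keeps samples inside
safe = ramp * width / (width + 1)
print(safe)                     # [0.     0.2667 0.5333 0.8   ]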

#8 ChadCapeland


Posted 16 September 2010 - 11:31 AM

Here's a sample demonstrating the issue using a CT and a Txr.

- Chad

Attached Files



#9 robocop


Posted 16 September 2010 - 04:34 PM

ha! now I understand!
thanks for the info. good to know about these subtle things.

#10 Nebukadhezer


Posted 17 September 2010 - 08:02 AM

you are just brilliant.

would you two guys be able to help with some math?
I want to create a disparity map from the P pass. We have a P pass for each eye, plus the stereo camera... so I thought we could calculate the "offset" for each pixel from the left eye to the right eye?!
But I don't really have a clue how to start.. in the end it would only be the red channel that displaces content for the right eye...

I hope that was understandable :-)

and maybe it is already on your list, Chad???


Cheers

johannes

#11 bfloch


Posted 17 September 2010 - 08:32 AM

Since the world position gives the absolute position in X, Y, and Z, you should only need to look for the same pixel value of image A in image B.
Wouldn't the disparity map in pseudocode be something like:

for each pixel_in_A:
    for each pixel_in_B:
        if pixel_in_A == pixel_in_B:
            return vector( pixel_in_A_pos, pixel_in_B_pos )


Only one piece is missing and we have a poor man's Ocula:
1. Recreate the depth from optical flow
2. Use robocop's depth-to-world-position
3. Generate the disparity map

Ok, who will do the depth from moving pictures? :) Isn't that possible with Furnace?

#12 Nebukadhezer


Posted 17 September 2010 - 08:46 AM

hey Blazej,

as we are on a full-CG production we don't have that problem; it is all there! I think maybe writing a shader in 3D is even easier... but in general it would be nice to have a "poor man's" Ocula inside Fusion.
I think any optical flow program is able to throw out depth; I used it once with PFTrack!
it took ages and was, well, usable to some degree..

but maybe it would work

cheers
johannes

#13 xmare


Posted 17 September 2010 - 08:47 AM

maybe someone with an understanding of OFlow?
using OFlow to calculate pixel movement -> bigger movement = closer to camera. a poor man's approach, which could be further refined with additional masking/keying etc... (no solution for trees and windy weather, though :))
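(A tiny NumPy sketch of that idea, purely as an illustration: for a camera that only translates, parallax makes depth inversely proportional to pixel motion, z = f * B / |d|. The focal_px and baseline names are made up, and camera rotation or moving objects break the assumption, as noted above.)

import numpy as np

def depth_from_flow(flow_x, focal_px, baseline, eps=1e-6):
    # flow_x   : (H, W) horizontal flow in pixels between the two frames
    # focal_px : focal length expressed in pixels
    # baseline : camera translation between the frames, in scene units
    # bigger pixel motion = closer to camera: z = f * B / |d|
    return focal_px * baseline / np.maximum(np.abs(flow_x), eps)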

#14 robocop


Posted 17 September 2010 - 08:49 AM

unfortunately I don't know exactly..
but not so long ago I read a good SIGGRAPH paper about stereo, and I remember they talked about creating disparity..

http://zurich.disney...rityMapping.pdf

#15 ChadCapeland


Posted 17 September 2010 - 08:58 AM

and maybe it is already on your list, Chad???


Yeah, but in the "done" column. :)

Blazej is almost right with his idea, except that, because of quantization, you cannot find pixels where A = B; rather, you have to find where abs(A - B) is lowest. Which means you can't just test; you have to build a buffer.
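(A brute-force sketch of that nearest-match search in NumPy, illustration only: it assumes rectified views so the match lies on the same scanline, plus a bounded search window; the buffer of per-candidate errors is what the test-only pseudocode above is missing.)

import numpy as np

def disparity_from_ppass(p_left, p_right, max_search=64):
    # p_left, p_right : (H, W, 3) world-position passes for the two eyes
    # returns (H, W) horizontal disparity in pixels, left -> right
    h, w, _ = p_left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            lo, hi = max(0, x - max_search), min(w, x + max_search)
            # buffer of absolute-difference errors for each candidate pixel;
            # quantization means we minimize |A - B| instead of testing A == B
            err = np.abs(p_right[y, lo:hi] - p_left[y, x]).sum(axis=-1)
            disp[y, x] = (lo + int(err.argmin())) - x
    return disp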

As for the missing parts of Ocula: you CAN use Twixtor to find the disparity between the two eyes and generate a very good depth map (assuming your image is optical-flow-able). What I don't know how to do (and don't even know whether Ocula knows how to do) is minimize the error over time.

- Chad



