Position Pass



#1 Drazen

Posted 15 November 2009 - 08:11 PM

hey all,

I have been watching fxguidetv episode #071

http://media.fxguide...idetv-ep071.mov

and there are some cool tips done in Nuke.

Is there any way/script/anything to import a "position pass" out of a 3D package and use it in Fusion's 3D space? That seems like very cool stuff to work with.

Maybe via a particle system?

thanx

Drazen

#2 mdharrington

Posted 16 November 2009 - 12:41 AM

Did this out of LightWave...

Just used the PSD export to generate a depth bitmap...
I see the pCustom tool only allows RGBA to be read from the original image... so it takes some tinkering to get it to work: writing the depth into the alpha channel.

Somebody with more talent than me could make this work a lot better... but this is totally doable in Fusion.
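
If you want to roll your own, the per-particle math the setup has to do is roughly this. A plain-Lua sketch, not a Fuse or a pCustom expression; the fov and aspect values are assumptions you'd take from your render camera:

-- plain-Lua sketch of the per-particle math, not a Fuse or a pCustom
-- expression; fov_deg and aspect are assumptions from your render camera
local function pixel_to_camera_space(sx, sy, depth, fov_deg, aspect)
  -- sx, sy are 0..1 screen coords, depth is the value we wrote to alpha
  local half = math.tan(math.rad(fov_deg) * 0.5)
  local dir_x = (sx * 2 - 1) * half            -- ray direction across the film
  local dir_y = (sy * 2 - 1) * half / aspect
  -- push the pixel's ray out by its depth; -Z looks into the scene
  return dir_x * depth, dir_y * depth, -depth
end

-- sanity check: the centre pixel at depth 5 lands on the camera axis
print(pixel_to_camera_space(0.5, 0.5, 5, 45, 16/9))   --> 0  0  -5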

Attached Files



#3 pingking

Posted 16 November 2009 - 12:55 AM

Robert Zeltsch did a very nice and quick example of the stuff seen in the fxguide episode. He posted it on the Fusion mailing list.

I'll attach his stuff here. Hope Robert doesn't mind. Do you, Robert?

Attached Files



#4 Gringo

Posted 16 November 2009 - 08:19 AM

In some cases you can use a Displacement3D.

See also a particles example from Chad:
http://www.anatomica...scope/#more-985

#5 ChadCapeland

Posted 16 November 2009 - 09:45 AM

Gringo said:

In some cases you can use a Displacement3D.

See also a particles example from Chad:
http://www.anatomica...scope/#more-985


Yeah, I did a demo of that at the SIGGRAPH booth, where I showed a fully interactive 3D comp in which the camera and target views could be precisely mapped to a volume of a colon, letting you do virtual colonoscopy exam verification in Fusion.

The "worldspace/localspace intersection map" is the reason we made KrazyKey too. It lets you find the intersection of two such maps.

- Chad

#6 Drazen

Posted 16 November 2009 - 02:20 PM

wooow thanx guys,

for sure I will take a deep look into all of that

thanx for the tips and the comps

for sure I will need some help again soon :)

Drazen

#7 protean

Posted 20 November 2009 - 08:51 AM

Hey, I did a quick test using Robert's examples and some P passes for the particle position. Video isn't that exciting...

http://www.vimeo.com/7722904

I wish I didn't have to use particles, as they are slow and clunky, but it all works in practice.

John

PS: And of course I'm not sure what IE (Image Engine) have as their special surface position pass.

#8 ChadCapeland

Posted 20 November 2009 - 09:52 AM

protean said:

Hey, I did a quick test using Robert's examples and some P passes for the particle position. Video isn't that exciting...


So the Locator3D is being used as a visual guide in XYZ to the color in the position pass, and you make a mask based on the pixel color distance from the sample color? Interesting, hadn't thought of that.

I'm using a setup now where we render an image out of 3ds Max, pull it into Fusion, make the particles, then adjust the camera separation and convergence using the quad-buffered OpenGL view, and send the resulting two numbers back to 3ds Max. It's a bit of a hack, but until 3ds Max brings back quad-buffered OpenGL viewports, it's the best way to get quick checks on the stereo.
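
If anyone wants to play with those two numbers, the textbook parallel-camera relationship between separation, convergence and screen parallax is easy to scribble down. A hedged plain-Lua sketch; this is generic stereo math, not how 3ds Max or Fusion store these values:

-- textbook parallel-camera stereo math (plain Lua); generic, and an
-- assumption that the rig converges by image shift rather than toe-in
local function screen_parallax(separation, convergence, depth, focal)
  -- cameras 'separation' apart, shifted so points at 'convergence'
  -- distance have zero parallax; result is in film-back units
  return focal * separation * (1 / depth - 1 / convergence)
end

print(screen_parallax(6.5, 400, 400, 35))   --> 0 (on the convergence plane)
print(screen_parallax(6.5, 400, 200, 35))   --> 0.56875 (nearer than convergence)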

Wonder what else we can all come up with?

- Chad

#9 protean

Posted 20 November 2009 - 10:20 AM

I took quite a literal approach to the whole thing.

ChadCapeland said:

So the Locator3D is being used as a visual guide in XYZ to the color in the position pass, and you make a mask based on the pixel color distance from the sample color?


yes.
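
For anyone following along, the guts of that mask is just a distance test. Here's the idea in plain Lua; in the comp this logic would live in a Custom Tool or expression instead, and the names below are made up for the sketch:

-- plain-Lua illustration of the position mask; the function and variable
-- names are invented for the sketch, they are not Fusion's
local function position_mask(r, g, b, sx, sy, sz, radius)
  -- r,g,b = position-pass colour at a pixel; sx,sy,sz = XYZ sampled
  -- at the Locator3D; radius = falloff distance
  local dx, dy, dz = r - sx, g - sy, b - sz
  local dist = math.sqrt(dx * dx + dy * dy + dz * dz)
  if dist >= radius then return 0 end
  return 1 - dist / radius   -- soft falloff towards the edge
end

-- a pixel half a unit from the sampled point, with a 1-unit falloff:
print(position_mask(1, 2, 0.5,  1, 2, 0,  1))   --> 0.5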

One minor problem is the particle 3D rendering. Is there a way of producing a point cloud with a fuse without using pRender? I can see how to bring in points from an OBJ, but how to display the points is another thing.

John

#10 ChadCapeland

Posted 20 November 2009 - 01:00 PM

protean said:

One minor problem is the particle 3D rendering. Is there a way of producing a point cloud with a fuse without using pRender? I can see how to bring in points from an OBJ, but how to display the points is another thing.


Is the problem the generation of the particles, or the rendering? What kind of style are you using? NGons and Points are crazy fast for me.

The workflow idea (as I see it) is to make a dead-standard EXR file, embed everything in channels or metadata, and not require the 3D software to export "real" meshes. So are you expecting to import OBJs? Or do you want it to feel like you have OBJs in your Fusion comp, but generated from the images?

There are no 3D fuses, so all we can do is make a C++ plugin. But what data can we get from the images? XYZ and UVW and IDs, obviously, but what else? How much of a topology can we represent? Are displaced planes enough? What about marching cubes or marching squares? A Delaunay-tessellated membrane? Can we actually remake the topology? Would we even want to? And how much use is it, considering that if you only use it on 0.5% of shots, it's not as onerous to export the meshes from the 3D software.

- Chad

#11 protean

Posted 20 November 2009 - 05:37 PM

ChadCapeland said:

Is the problem the generation of the particles, or the rendering? What kind of style are you using? NGons and Points are crazy fast for me.


The rendering, I suppose. It doesn't need to be a sim, so I turn pre-roll off on the pRender, but that means that when scrubbing the timeline the particles occasionally don't get displayed (or rendered in the 3D viewport), so I have to click 'Restart' on that frame to make the particles appear. What I have done is put an expression on the Restart button so that it updates each time the timeline is scrubbed, and therefore always produces particles in the viewport.

It seems a bit clunky to me, because this technique doesn't strictly need particles (in theory) and therefore shouldn't need a pRender. Don't get me wrong, it's not ultra slow, but it's not lightning fast for me either, and alas I'm still on Fusion 5.21, if that makes a difference.

ChadCapeland said:

The workflow idea (as I see it) is to make a dead-standard EXR file, embed everything in channels or metadata, and not require the 3D software to export "real" meshes. So are you expecting to import OBJs? Or do you want it to feel like you have OBJs in your Fusion comp, but generated from the images?


Pretty much. It was cool to see IE's point-cloud representation of their rendered footage in 3D, and I liked the way it served as location information for post lighting, image planes and masks. It could even serve as a locator reference for DOF plugins.

I don't think it's an amazing workflow enhancement for run-of-the-mill stuff, but I remember some previous projects where this would have been quite handy in many ways. That, and the guys here at work got all excited about the fxguide/IE video, so I had to try some of it out with Fusion's native tools.

ChadCapeland said:

There are no 3D fuses, so all we can do is make a C++ plugin. But what data can we get from the images? XYZ and UVW and IDs, obviously, but what else? How much of a topology can we represent? Are displaced planes enough? What about marching cubes or marching squares? A Delaunay-tessellated membrane? Can we actually remake the topology? Would we even want to? And how much use is it, considering that if you only use it on 0.5% of shots, it's not as onerous to export the meshes from the 3D software.


You don't need to remake the topology... you've gone too far there :) What you'd want is a stable import of OBJs/meshes and a point cache. Last time I tried to import an OBJ sequence into Fusion it was painfully slow. There is probably some way of rendering deep-pixel volume information so you could recreate the full 3D form, though probably at quite reduced resolution.

On the topic of fuses... I did get one to read OBJs, and another particle fuse where the particles are coloured by the 'style' parameter, but I was having difficulty actually specifying the colour of an individual particle. Any way to do that? I mean, you can set the position etc. of a given particle, but I couldn't find any info on doing the same for the colour.

Cheers

John

#12 ChadCapeland

Posted 20 November 2009 - 06:45 PM

Run-up? Were you re-emitting the particles each frame, or recycling them? There's no run-up for me; it just works. Particles only exist at the current frame, and that's it.

There's nothing I'd do in a fuse, no, but we are looking at doing 3D plugins for animated mesh importing. But the appeal of the point cloud is that there's no exporting to worry about. Just load the EXR and that's it.

- Chad

#13 protean

Posted 20 November 2009 - 07:05 PM

Hmm... Maybe that's where I've gone wrong. I've been recycling the initial particles but I'm not sure why now. You're saying to generate the particles every frame with a lifespan of 1? I'll try it out next week.

J

#14 dts74

Posted 23 November 2009 - 04:31 AM

Hey, could you please re-post the Vimeo thing? I can't get it to load.
And how would you guys set up the position pass using Maya mental ray?
I've been trying mib_texture_vector set to -1 and experimenting with different settings, but I don't know when it's a "proper" position pass.
//daniel

protean said:

Hey, I did a quick test using Robert's examples and some P passes for the particle position. Video isn't that exciting...

http://www.vimeo.com/7722904

I wish I didn't have to use particles, as they are slow and clunky, but it all works in practice.

John

PS: And of course I'm not sure what IE (Image Engine) have as their special surface position pass.



#15 protean

Posted 23 November 2009 - 06:16 AM

My video doesn't show how it's done, only shows it working.

RenderMan-compliant renderers will output a P pass quite easily; mental ray, not so much. Bear in mind that the RenderMan P pass is in camera space, so I had to 'transform' the colours based on the imported camera. It would be easier to get a world-space P pass, like so:

http://www.zanity.co....html#question5
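
The 'transform' step itself is nothing fancy, just the camera's camera-to-world matrix applied to each colour. A plain-Lua sketch; the row-major 4x4 layout is an assumption, so check which matrix and ordering your 3D app actually exports:

-- plain-Lua sketch of the camera-space -> world-space step; the
-- row-major 4x4 layout is an assumption, check what your app exports
local function cam_to_world(m, x, y, z)
  return m[1][1]*x + m[1][2]*y + m[1][3]*z + m[1][4],
         m[2][1]*x + m[2][2]*y + m[2][3]*z + m[2][4],
         m[3][1]*x + m[3][2]*y + m[3][3]*z + m[3][4]
end

-- identity camera at the origin: camera space and world space agree
local I = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} }
print(cam_to_world(I, 1, 2, 3))   --> 1  2  3

That identity case is also Daniel's "when is it a proper pass" question in miniature: the colour under a point whose world position you already know should read back as exactly that XYZ.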



