Tuesday, November 30, 2010


A lot has happened with the project and I can't really put it all in one post, so I'll break it up to make more sense.  This post will be a bit of a tutorial on my post-production workflow, so for those that this does not concern, you may just want to stay tuned for upcoming posts.

When I last left you, I had done a few tests using blue/red anaglyph with a demo of Dashwood's plugin Stereo3D Toolbox. This is a pretty comprehensive 3D post-production tool, and it has been recently updated.  They have two versions available: a lite version with a basic toolset at a $99 price point, and a more pro version with most of what you would need for 3D post-production, but at a $1,500 price point.  This made it unattainable for me, so I went looking for other solutions.  It turns out that it is quite easy, if somewhat less elegant, to create 3D in Final Cut Pro, or any other NLE.  I start by finding the sync point (clap) on both clips, set an in point on both clips, then superimpose them on the timeline.  From there, I change the Composite Mode for the clip on V2 to "add".  To do this, simply right-click on the clip, go to Composite Mode in the drop-down menu, and choose Add.

After that, apply FCP's Channel Mixer filter: for the right eye, keep only the blue channel active, and for the left eye, keep only the red.  From there, you only have to adjust the convergence of the images, using either the wireframe handles or the Basic Motion parameters.

This creates a functional red/blue anaglyph.  It takes a bit more tooling than a preset plugin, but it is nonetheless pretty straightforward.
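For anyone who'd rather script this than click through an NLE, the same channel math can be sketched in a few lines of numpy. This is not the FCP workflow itself, just an illustration of the recipe above; it assumes you already have the two eyes loaded as aligned RGB frames:

```python
import numpy as np

def red_blue_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine an aligned stereo pair (HxWx3 uint8 RGB) into a
    red/blue anaglyph: the left eye keeps only its red channel,
    the right eye keeps only its blue, and the two layers are
    composited with "add" (clipped back to 8-bit)."""
    out = np.zeros(left.shape, dtype=np.uint16)
    out[..., 0] += left[..., 0]   # red channel from the left eye
    out[..., 2] += right[..., 2]  # blue channel from the right eye
    return np.clip(out, 0, 255).astype(np.uint8)
```

Since each eye contributes a different channel, the "add" composite never actually overflows here; the 16-bit intermediate just mirrors how an additive blend behaves in general.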

I have also discovered a much better alternative to red/blue anaglyph, from the ColorCode3D company.  They produce special anaglyph glasses (and some software) that use blue/amber filters instead of the traditional red/blue.  The main advantage of this system is color: most of the chroma information is kept, as the combination of blue/amber re-creates most of the color in the original image.  The other advantage is that, assuming you have tight convergence in your layers, the image can also be viewed in 2D in what amounts to full color.  I used the preceding method to create a ColorCode-compatible anaglyph, creating an amber left eye instead of a red one.
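The same sketch adapts to the amber/blue split. To be clear, this is only a rough approximation: ColorCode3D's actual encoding is proprietary and does more than a simple channel split, so treat this as the naive channel-mixer version of the idea:

```python
import numpy as np

def amber_blue_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Naive ColorCode-style anaglyph from an aligned stereo pair
    (HxWx3 uint8 RGB): the left eye keeps red + green (amber), the
    right eye keeps blue.  The real ColorCode encoding is proprietary
    and more sophisticated than this straight channel split."""
    out = np.zeros_like(left)
    out[..., 0] = left[..., 0]    # red   -> amber left eye
    out[..., 1] = left[..., 1]    # green -> amber left eye
    out[..., 2] = right[..., 2]   # blue  -> right eye
    return out
```

Because red and green together carry most of the luma and chroma, this is why the result still reads as near-full-color in 2D when convergence is tight.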

They sell paper glasses for pretty cheap, and they are also available for free from the NFB.  I also believe these were given out during the Super Bowl to watch 3D commercials in the US, and here in Canada they were given out by the CBC to watch the Queen in 3D... slight cultural difference.

Friday, November 5, 2010

It's been a while...

Hello readers,

Ok, so it has been a really long time (a year!). Speed has never really been a part of my artistic process, and in the meantime many life events came into the fold, but nonetheless, much progress has been made on the piece.

First off, the piece has mutated into a live performance using VDMX and Ableton Live. To control the software, I am using both a MIDI controller from Novation and TouchOSC by Hexler.net on my iPhone, as well as Soundflower to pipe audio from Ableton to VDMX, using that input to modulate the picture.

Over the years, I had a growing desire to explore live audio/visual performance, and in the time since updating this blog, I've been busy exploring the possibilities.  I really like the performative aspect of playing live, developing and rehearsing a set, and the "danger" involved in a live context.

I know this post is a bit of a digression given that this is supposed to be about DIY 3D, but I thought I would (finally) update this page with the work that I have been doing, and explain how the piece has evolved. I will be back very shortly (promise, it's rendering right now!) to explain the developments in regards to the 3D.  In the meantime, here's some footage of a few live performances I did over the last little while. Enjoy, and thanks for your patience.

Tuesday, November 3, 2009

3D footage, and it works!

Hi all,

Here are some images!!  The material is gorgeous, thanks to the amazing and keen eye of Charles, my DOP, and moreover it works in 3D!  This was always the worry as we were flying blind, but I am happy to report that my first tests are conclusive.

I made a quick and easy anaglyph (red and blue glasses) using a beta plug-in from Sheffield Softworks.  It does a decent job and lets you slightly shift one of the images to compensate for inter-ocular distance (lens separation), but it is nowhere near as customizable as I would want.  Anaglyphs in general are far from what I want as a final product, but they make it fairly easy for people to experience the footage, as all you need is a pair of those red and blue glasses, the kind you find in cereal boxes.  I used a pair from The Young Indiana Jones Chronicles from 1992!
-(If possible, view the clips in HD and fullscreen to get a better 3D experience)-

This clip doesn't work as well as the previous one because of the proximity of the children to the cameras, and the resolution through YouTube. You get a ghosting effect, where two images are perceived instead of one.  To fix this, I need to further shift the images in post-production. The effect of depth is felt nonetheless.
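Outside an NLE, that convergence fix is just a horizontal translation of one (or both) of the eyes before compositing. A minimal numpy sketch, assuming aligned RGB frames and padding the exposed edge with black:

```python
import numpy as np

def shift_horizontal(frame: np.ndarray, pixels: int) -> np.ndarray:
    """Shift an HxWx3 frame sideways; positive values shift right,
    negative shift left.  Applying opposite shifts to the two eyes
    moves the convergence plane, which is what reduces ghosting."""
    out = np.zeros_like(frame)
    if pixels > 0:
        out[:, pixels:] = frame[:, :-pixels]   # shift right, pad left edge
    elif pixels < 0:
        out[:, :pixels] = frame[:, -pixels:]   # shift left, pad right edge
    else:
        out[:] = frame
    return out
```

In practice you would then crop both eyes equally so the black bands don't show, but the shift itself is the whole convergence adjustment.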

I'll post a whole series of tests and reports on what worked and what didn't very soon, but in the meantime, here is some of the un-altered material in 2D, to give you a better idea of what it really looks like.

Music: My Girls by Animal Collective

Wednesday, October 28, 2009


OK!  Some major developments: I shot a movie! 

It was a long process of gathering an amazing team and some good equipment, setting aside some time, and then going for it.  It was an amazing day: we had beautiful weather and a great bunch of people.  We shot in two locations, both about an hour outside of Montreal.

Due to the amazing resourcefulness of my talented DP, we had two Sony EX-1s, a Sachtler Video 25 tripod, a Steadicam and a shoulder-mount rig.  The EX-1s are really nice cameras: they shoot true 24p on 1/2-inch chips and record full-frame HD onto SxS flash cards.  They are especially good for ensuring consistency across both cameras because they have proper markings on the focus, zoom and aperture rings, allowing the DP to check the optical settings at a quick glance.  Where I might run into problems with the EX-1s is the wide separation between the lenses, much wider than the average human eye.  This can cause strange image distortions and miniaturization, where everything appears smaller or further away than it really is. (I recommend reading the whole article, very informative about shooting 3D in general)

We also had a stereo shotgun mic and a pair of binaural mics, which give an incredible result when played back on headphones.

A binaural clip of Jessica with the children - please listen on headphones for effect.

The entire process was trial and error. I followed advice gleaned from people with experience in 3D and from the internet, but it remained a novel experiment for everyone on the shoot.  And I embrace this process, where "mistakes" in workflow can lead to interesting results; this, for me, is where the experimental nature of the piece comes in.  It was truly an experiment, from the technical aspects of camera placement, focal length and camera movement, to the challenges of working with child actors.  From the start I was interested in testing the boundaries of 3D: how it works, where it breaks down, and if the results don't 'work' in a traditional sense, I will try to play with them and hopefully steer the piece in an interesting direction that way.

In closing this post, I would like to quickly address the DIY-ness of this piece.  Some people might disagree with my branding the shoot as DIY, as it was done with professional equipment and crew instead of building and soldering something together by myself.  But for me, DIY is about getting it done, any way possible, and as well as possible. For the longest time I hesitated due to financial considerations, applying to grant programs and waiting around for funding to fall from the sky. But then I took stock of the available resources and pushed ahead, no matter what would come out of it or how it would be shot.

The only reason I have any images at all, let alone that it took on such professional proportions, is because of the amazing people that helped me.  It would have been impossible without everyone there, and so I extend a very sincere thank you and dedicate the rest of this process to you all.

Stay tuned for footage...

Tuesday, October 13, 2009

Inside the box

I realize I'm a little overdue on elaborating on the piece itself, but I've been busy trying to organize the shoot and am focusing all my energies there.

I can summarize the work as a loose narrative depicting a character’s memory of a group of children running through fields, carefree and playful.  The children happen upon something shocking, something terrible, and they flee in terror.  The piece is experienced from a first-person point of view, as the character becomes trapped in the moment, lost in memory.

In my work, I constantly return to memory as a focal point.  I am fascinated with how memory is created, and how it evolves over time. I am especially interested here in the point of contact: the moment a memory is created. In this piece, I focus my attention on the mechanics of perception and how sensory information gets processed into memory.

Our existence is mediated moment by moment by what information we input and how that information is processed.  These two factors dictate the form and importance of the resulting memory, or lack thereof. In moments of primal emotion, physical sensations, the inputs, are pushed, even forced directly into memory.  Memory almost always changes over time as our perception shifts or the memory fades, but when created under duress, memory may retain more of its raw emotional and sensorial data. 

In this work, I want to exploit as many environmental inputs in the viewer as possible by using 3D, 2D, surround audio and a first person immersive environment.  I want to play with perception, explore its mechanics and its limits, to see where and how it breaks down. I want to slip the viewer’s perceptual footholds, and in doing so, immerse them in a fictional memory, seared into the main character’s psyche by the brute force of the situation.

A light, cheery story just in time for Halloween!

Opcodism art by dumpanalysis.org

Friday, October 9, 2009


This is really an exploration, a trial-and-error endeavor, and so a learning process. So far I have some great teachers: a really nice and knowledgeable senior 3D animator from the NFB named Peter Stephenson has been giving me tips and advice over email, and last night I met with my good friend Kieran Crilly, who is a wealth of technical knowledge and general advice about all things filmic.  Turns out he has been researching 3D production for over a year now, and he really opened my eyes on a lot of points, a true schooling. He is also amazing to just bounce ideas off of: you give him an idea, and he will expand on it ten times.

Among the things he made me realize is convergence: the cameras need to be slightly angled in so that they converge on the same point, and this is where the crux of the 3D effect will be felt, the part of the image that will be "pulled out".

Because I'm working on an installation where each eye has its own "screen", I will keep researching to make sure this is the case, but it does make a lot of sense and concurs with the literature I've been reading: stereoscopy.com

I was assuming that having the cameras parallel on the same horizontal axis would be enough, but like I said, I will keep on researching.
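For what it's worth, the toe-in angle itself is simple trigonometry: each camera rotates inward by the arctangent of half the interaxial separation over the distance to the convergence point. A quick sketch; the 65 mm figure is just the commonly quoted average human eye separation, not a measurement from my rig:

```python
import math

def toe_in_angle_deg(interaxial_m: float, convergence_m: float) -> float:
    """Degrees each camera must be angled inward so the two optical
    axes cross at the convergence distance.  Half the interaxial
    separation over the distance to the convergence point."""
    return math.degrees(math.atan((interaxial_m / 2.0) / convergence_m))

# e.g. 65 mm interaxial converging on a subject 2 m away
# gives roughly 0.93 degrees of toe-in per camera
```

The angles involved are tiny, which is partly why the parallel-axis approach (shifting the images in post instead of angling the cameras) is also workable.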

The other big thing that came out of our meeting is a change in plans for gear: no 5DmkII. As I had hoped, Kieran has researched and has some experience with the 5D, and it seems that one of the big draws of shooting with a full-frame sensor, a very shallow depth of field, doesn't make for good 3D. He explained it in a way that made total sense to me: shallow depth of field, a long-standing filmic convention, is used to simulate depth. It cuts out the subject from the background, whereas in 3D you don't need to create a sense of depth; there is depth!

Depth of field on the 5DmkII seems challenging to control at the best of times, especially when dealing with movement, as the subject can very easily slip in and out of focus.  I'm certain it could be successfully used for this project, but I'm dealing with too many other variables that require special attention (3D, child actors, a short time frame); incorporating an imaging system that requires its own special care and attention is a little too much for this shoot.

I'm not sure what I'll use instead, but I am very glad to have these amazing people involved and to be learning so much.

Tuesday, October 6, 2009

The beginning.

This blog will chronicle my DIY project to shoot a 3D video installation with little budget and a very limited time frame (max one month), and to do it in the highest-res quality possible.

Image from: Jesse Ferguson

Why and how?

Why 3D and why the rush?
In my work, I have developed a curiosity with perception.  How we take in stimuli, especially visual and audio stimuli; how it functions and where it breaks down.  3D not only adds an extra layer to play with (as well as an extra technical challenge), but it mirrors our own perception of the world.  I want to experiment with it, see how it works and where its limits lie.

And the rush? Well, it's because of location - and I'll elaborate on the 'story' later - but it is to be shot in a corn field, and as the fields are to be harvested soon, the clock is ticking.

Which is ironic, because it's a project I've had in the works for years.  I've worked through the theoretical aspects, written about it, applied for funding, but nothing ever came of it, so I decided to do it anyway, on my own and with my own means.  So I started researching, poking around the net for ideas of how I could do it, and then just this weekend decided that it was now or never.  And if I want to do it in this location, it's in the next 3 weeks or never.

Some test footage on location scout with a DVX-100A.
Music: Blood Rainbow by Tim Hecker

The How.

A 3D funhouse-warehouse: 3dstereo dot com

After some research, I think I've decided to shoot using two Canon 5DmkIIs, with their beautiful full-frame sensors, 1080p resolution, and compact, lightweight form.  I am not 100% sold on this format, as it seems to be, for lack of a better term, an "in-between" format, i.e., a really good still camera offering some very decent video options, but when you are coming from pro video cameras, it leaves some things to be desired.  I will keep you posted, but this seems like a pretty sweet solution for now.

I will use the Lvshi Twin Camera Bar pictured above to ensure proper camera placement, and the whole rig will be placed on a Steadicam.

I will detail more about the shoot and elaborate on the actual project later, but for now I will leave you with thoughts on how everything has already been done / we are all one sentient, conscious being that shares thoughts / the internet is a crazy place.

The final form of the installation will be based on a modified nineteenth-century stereoscope, but instead of the stereo cards that come with the viewer, I will install some kind of LCD screen.  I proudly came up with the idea all by myself, but lo and behold, a little searching on the intertubes reveals it has been done before, by two different people!!

This was very surprising and, at first, a little annoying, because I really thought I had something novel, but I quickly became excited because it proved that the project is technically feasible, and it gave me some ideas I would never have come up with on my own.  Like using a PSP for a screen, in the picture below from retinalrivalry.com, or a full step-by-step instruction for a similar project from Jesse Ferguson, pictured above.

My piece will of course have its own form/style, but these are great inspirations and actually not too far from what my final product will look like.

Stay tuned as I detail what is to be created to go inside the box.