(Re)constructing Reality
Okay, so here are a few interesting videos relating to the field of photography (or digital image processing, to be more precise) that I've come across over the last couple of months. Some of these have been around for a while, but I thought that as a collection they were sufficiently interesting to post up on here. What I find interesting about them is the way they deconstruct or alter how we relate to reality: slowing down time to observe imperceptible movements, or reinterpreting images to reveal seemingly hidden information.
A trillion frames per second
The first is a video from researchers at MIT who've developed a trillion-frame-per-second (fps) camera. That's correct - a trillion. You're probably used to watching video at around 30 fps, which is fast enough to trick your mind into perceiving motion between frames.
This camera, however, is capable of capturing light as it travels from point A to point B. Although it doesn't seem to be able to capture the movement of individual photons, it does seem able to capture individual pulses of light as they move across the frame or scatter as they interact with certain materials.
Interestingly, it's dubbed by its creators the world's 'slowest fastest camera': despite being able to capture light in flight, it can only record data in two dimensions, and only one of those is spatial (the other is time).
So in order to record enough data for a full, multidimensional movie, it must record the scene multiple times from slightly different angles, and this takes time (up to an hour, apparently). Anyway, the video below elaborates on this and features some of the incredible footage captured by the camera.
http://youtu.be/EtsXgODHMWk?hd=1
As the camera requires multiple takes to obtain enough data, it seems that its applications are somewhat limited. However, its ability to capture light as it scatters across a scene is certainly valuable in the analysis of different materials, and could even be used for what its designers describe as 'ultrasound with light'. Read more here and here.
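To give a rough sense of why assembling the movie takes so long: each acquisition only yields a 2D slice (one line of the scene against time), so the full clip has to be stitched together from many such slices. Below is a loose sketch of just that bookkeeping step in Python. The names and shapes are my own illustration, assuming each scan covers a single row of the image; it's not the researchers' actual pipeline.

```python
import numpy as np

def assemble_volume(streak_images):
    """Stack repeated streak-camera scans into a (scanline, x, t) volume.

    streak_images : list of 2D arrays, one per scan position; each has
                    shape (x, t) - a single spatial axis against time,
                    which is all the sensor records in one acquisition.

    Returns an array of shape (num_scans, x, t). Slicing along the last
    axis then gives one full 2D frame per time step, i.e. the movie.
    """
    return np.stack(streak_images, axis=0)

# Usage (hypothetical): frame_at_t = assemble_volume(scans)[:, :, t]
```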
The camera never lies?
The first time I saw this I was pretty stunned. The video outlines a new and simple method of realistically inserting objects into an image after it has been taken (in post-production, essentially). This is done without the user having to perform complex measurements of perspective or lighting - instead, with minimal annotation, the user can place objects into an image and the system will work out the lighting conditions they should conform to. The result, as you will see, is incredible, with the inserted objects appearing as if they existed in the original scene.
What's more, the researchers found that subjects were unable to tell the difference between real images and images generated by their system. It looks so good it's almost a little disturbing.
You sort of have to see it to believe it:
[vimeo http://www.vimeo.com/28962540 w=501&h=376]
You can read more about it here, or their research paper here.
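For contrast, here's a toy sketch of the naive approach the researchers are improving upon: plain alpha compositing with a crude colour-cast match. It does nothing like the geometry and illumination estimation in the actual work (the function, the names and the 'tint' heuristic are all hypothetical), but it shows why pasted objects normally look pasted.

```python
import numpy as np

def naive_insert(background, obj_rgb, obj_alpha, y, x):
    """Toy object insertion: alpha compositing with a crude colour match.

    background : (H, W, 3) float image in [0, 1]; modified in place.
    obj_rgb    : (h, w, 3) rendered object to paste in.
    obj_alpha  : (h, w) coverage mask in [0, 1].
    y, x       : top-left corner of the paste region.
    """
    h, w, _ = obj_rgb.shape
    patch = background[y:y + h, x:x + w]

    # Crude 'lighting' match: scale the object's colour channels so its
    # average tint resembles the region it lands on. This is nowhere
    # near estimating real illumination or shadows - it just softens
    # the most obvious colour-cast mismatch.
    tint = patch.mean(axis=(0, 1)) / (obj_rgb.mean(axis=(0, 1)) + 1e-6)
    obj_matched = np.clip(obj_rgb * tint, 0.0, 1.0)

    alpha = obj_alpha[..., None]
    background[y:y + h, x:x + w] = alpha * obj_matched + (1 - alpha) * patch
    return background
```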
Reconstructing reality?
The last two videos are also pretty smart, describing processes by which poor quality images can be reinterpreted or reconstructed using the information within them.
If you can get past the rather dry voice-over, the first involves reinterpreting the data within an image, allowing one to:
"Decide later if it stays a photo, becomes a video or turns into a lightfield so you can digitally refocus"
http://youtu.be/mAS2IxieUj4
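The 'digitally refocus' part of that quote refers to light field refocusing, which at its simplest is just shift-and-add: each sub-aperture view is shifted in proportion to its position in the aperture, then the views are averaged. Here's a minimal sketch of that idea, assuming the light field has already been unpacked into a grid of views; the array layout and parameter names are my own, not anything from the video.

```python
import numpy as np

def refocus(lightfield, slope):
    """Crude shift-and-add refocusing of a 4D light field.

    lightfield : array of shape (U, V, H, W) - a grid of grayscale
                 sub-aperture views.
    slope      : controls the synthetic focal plane; 0 keeps the
                 original focus, other values push it nearer or farther.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # centre of the aperture, then accumulate.
            du = int(round(slope * (u - U // 2)))
            dv = int(round(slope * (v - V // 2)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Integer shifts with np.roll keep the sketch short; a real implementation would interpolate fractional shifts for smoother results.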
The final video is one you're likely to have come across already; it details an extraordinary feature in the upcoming release of Adobe's Photoshop (CS6) - an image-deblurring feature which seems to work remarkably well, able to pluck lost detail from what seems like nowhere:
http://www.youtube.com/watch?v=Q10kwKm77RY&feature=related
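The classical machinery behind this kind of deblurring is deconvolution. Photoshop's feature apparently estimates the blur kernel automatically (the 'blind' part, which is the hard bit); the sketch below assumes the kernel is already known and just applies a standard Wiener filter with plain NumPy, so it only covers the easy half of the problem.

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Non-blind Wiener deconvolution of a grayscale image.

    blurred : 2D array, the blurry image.
    psf     : 2D array, the (known or estimated) blur kernel.
    k       : noise-to-signal ratio; larger values suppress ringing
              and noise amplification at the cost of sharpness.
    """
    # Pad the PSF to the image size and move its centre to (0, 0)
    # so the FFTs line up with the image.
    psf_padded = np.zeros_like(blurred, dtype=np.float64)
    ph, pw = psf.shape
    psf_padded[:ph, :pw] = psf
    psf_padded = np.roll(psf_padded, shift=(-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_padded)
    G = np.fft.fft2(blurred)
    # Wiener filter: attenuate frequencies where the blur destroyed
    # most of the signal rather than amplifying the noise there.
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```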
I definitely ran out of steam towards the end of this post.
Thanks.