
ED PROSSER

DIRECTOR // DOP

Expanding on the power of the image

Ed Prosser October 12, 2012

Non-linear use of Multimedia

I was recently made aware of the online platform Thinglink.com, which essentially allows you to 'tag' an image and embed media from around the web, such as from YouTube and Soundcloud. Anyway, this really got me thinking about some of the potential such a platform offers. Images provide a really powerful and direct way of communicating something (an image is worth a thousand words, bla bla), and being able to combine an image with additional multimedia or information can offer a much richer experience to the audience.

For example, you could use an image as a backdrop for presenting other media (such as related video and audio), or you could expand upon an image by tagging key areas and providing additional context with video, audio, text and other images.

I thought a lot about how this could be used for storytelling and perhaps even communicating science, particularly by augmenting image diagrams. There are already loads of cool interactive/animated diagrams and educational apps out there that essentially bring textbooks into the digital sphere, but they take a lot of know-how and time to develop. Thinglink offers a quick and accessible route for users to create their own interactive diagrams and multimedia packages, through which to share a rich wealth of information and also tell stories through non-linear pathways.

So I took my recent audio documentary on the vOICe technology (you can listen to it here) and I cut out sections that matched up with a diagram I found in a New Scientist article on the same subject (you can read it here).

Diagram below:

I then uploaded my clips to Soundcloud here:

http://soundcloud.com/eprosser/sets/the-voice-thinglink-samples/

and used Thinglink to embed the short sound files into the New Scientist diagram image, to produce an interactive diagram of sorts. The audio accompaniments augment the visual impact of the New Scientist diagram with some added 'context' from my documentary. Users can explore the subject at their own pace and listen to the clips in any order they choose. Click here to see it all together.
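
If you're curious what the same trick looks like without Thinglink, here's a rough hand-rolled sketch of the underlying idea (the image path, clip files and hotspot positions are all hypothetical): clickable hotspots overlaid on an image, each one wired to a short audio clip.

```python
# A minimal, hand-rolled version of an "interactive diagram":
# clickable hotspots positioned over an image, each playing an audio clip.
# All file names and positions below are made up for illustration.
from dataclasses import dataclass


@dataclass
class Hotspot:
    x_pct: float    # horizontal position, as a percentage of image width
    y_pct: float    # vertical position, as a percentage of image height
    label: str      # text shown on the hotspot button
    audio_url: str  # clip that plays when the hotspot is clicked


def build_page(image_url: str, hotspots: list) -> str:
    """Return standalone HTML: the image plus one audio-playing hotspot per tag."""
    parts = [f'<div style="position:relative;display:inline-block"><img src="{image_url}">']
    for i, h in enumerate(hotspots):
        parts.append(
            f'<audio id="clip{i}" src="{h.audio_url}"></audio>'
            f'<button style="position:absolute;left:{h.x_pct}%;top:{h.y_pct}%"'
            f" onclick=\"document.getElementById('clip{i}').play()\">{h.label}</button>"
        )
    parts.append("</div>")
    return "\n".join(parts)


if __name__ == "__main__":
    html = build_page(
        "diagram.jpg",
        [
            Hotspot(20, 35, "Camera", "clip-camera.mp3"),
            Hotspot(70, 60, "Headphones", "clip-headphones.mp3"),
        ],
    )
    with open("interactive_diagram.html", "w") as f:
        f.write(html)
```

The appeal of Thinglink is that it does all of this for you, but the underlying idea really is that simple: an image, a list of tagged positions, and a bit of media attached to each one.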

This was just a really quick proof-of-concept mock-up using existing work, but I'm really keen to start using this platform as a way of quickly creating rich multimedia packages which combine images, video and audio to communicate stories, ideas and information in a non-linear fashion.

In Audio, Interviews, Science Tags audio, communication, Multimedia, New Scientist, Science, Social Media, Storytelling, technology, Thinglink-com
1 Comment

Audio feature: Oh, I See

Ed Prosser September 30, 2012

Seeing with your ears.

An audio feature I produced over the summer for Pod Academy, exploring the development of the vOICe technology and its impact on blind users. The vOICe is a computer program developed by Dutch engineer Dr Peter Meijer which essentially converts images into sound. Through training and experience, blind users can learn to interpret these sounds as a sort of 'synthetic vision'. The piece explores the technology from the perspective of blind user Pat Fletcher, and uncovers some of the science and technology behind its use with its creator, Dr Meijer, and cognitive psychologist Dr Michael Proulx (University of Bath).

It was my thought that technology and the computer would be my way out of blindness.

-Pat Fletcher, vOICe user

http://soundcloud.com/eprosser/oh-i-see

Download it HERE

Pat Fletcher

Essentially, the software takes spatial information captured by a camera and converts this into a coded soundscape. Users can then learn how to decode this auditory signal into a visual one thanks to a process known as 'sensory substitution', where information from one sense is fed to the brain via another. Fundamentally, what the vOICe is doing is re-routing information usually obtained by the eyes and delivering it through another sense organ: the ears.
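
To make that mapping a bit more concrete, here's a toy sketch of the general principle (my own simplification for illustration, not Dr Meijer's actual algorithm): sweep the image column by column from left to right, map vertical position to pitch and pixel brightness to loudness, and you get a soundscape you can learn to read.

```python
# Toy illustration of image-to-sound 'sensory substitution':
# the image is swept left to right, one column at a time; row position
# sets pitch and pixel brightness sets loudness. This is a simplified
# sketch of the general idea, not the actual vOICe implementation.
import wave

import numpy as np

SAMPLE_RATE = 22050                   # audio samples per second
COLUMN_DURATION = 0.05                # seconds of sound per image column
FREQ_LOW, FREQ_HIGH = 200.0, 4000.0   # pitch range for bottom/top rows


def image_to_soundscape(image: np.ndarray) -> np.ndarray:
    """Convert a 2D grayscale image (values 0..1) into a mono waveform."""
    rows, cols = image.shape
    # Top rows get the highest frequencies, spaced logarithmically.
    freqs = np.geomspace(FREQ_LOW, FREQ_HIGH, rows)[::-1]
    t = np.arange(int(SAMPLE_RATE * COLUMN_DURATION)) / SAMPLE_RATE
    tones = np.sin(2 * np.pi * np.outer(freqs, t))  # one sinusoid per row
    frames = []
    for col in range(cols):
        # Weight each row's tone by that pixel's brightness, then mix.
        frames.append((image[:, col, None] * tones).sum(axis=0))
    signal = np.concatenate(frames)
    return signal / (np.abs(signal).max() + 1e-9)   # normalise to [-1, 1]


if __name__ == "__main__":
    # Test pattern: a bright diagonal line on a dark background,
    # which should come out as a tone sweeping downwards in pitch.
    img = np.eye(32)
    samples = (image_to_soundscape(img) * 32767).astype(np.int16)
    with wave.open("soundscape.wav", "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(SAMPLE_RATE)
        out.writeframes(samples.tobytes())
```

Run it and you get a short WAV file; the diagonal line is heard as a falling whistle, which is roughly the kind of cue a trained listener learns to interpret.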

Although the neuroscience and psychology behind the technology are still largely unknown, it is thought that the visual cortex is eventually recruited to process the incoming auditory information and, through experience, is able to decode it as spatial/visual information. There's a great article over at New Scientist that goes into greater depth about the neuroscience behind it - including a useful diagram depicting how the technology works.

The software is currently freely available and can be used with virtually any imaging device, from webcams to camera-mounted glasses – there’s even an Android version available for mobile devices! With the increasing prevalence of mobile computing, the vOICe technology is liberating users from their blindness, allowing them to step outside and experience the world through a completely new visual perspective.

For more information, visit http://www.seeingwithsound.com/ where you can experiment with the vOICe for yourself and learn more about how it works. I've also prepared a page with a collection of images as heard through the vOICe software, including some featured within the piece above.

Music

  1. Hypermagic – Start Again Start
  2. Ed Prosser – Untitled
  3. - – b31
  4. No Color – L’Aube
  5. Hypermagic – Pico Bisco
  6. Ed Prosser – Untitled
  7. Marcel Pequel – Four

Freesound Credits (freesoundarchive.com)

  1. Alarm Clock – 14262__xyzr-kx__alarm-clock
  2. Camera Shutter – 16071__heigh-hoo__nikonf4
  3. Data sound - 3647__suonho__futuretrocomputing-10-suonho
In Audio, Interviews, Radio Tags audio, Blindness, disability, hearing, interviews, neuroscience, Pat Fletcher, Science, seeing with sound, sensory substitution, technology, the brain, the voice, University of Bath, Visual cortex
3 Comments

(Re)constructing Reality

Ed Prosser December 16, 2011

Okay, so here are a few interesting videos relating to the field of photography (or digital image processing, to be more precise) that I've come across over the last couple of months. Some of these have been around for a while, but I thought as a collection they were worth posting up on here. What I find interesting about them is the way they deconstruct or alter how we relate to reality: slowing down time to observe imperceptible movements, or reinterpreting images to reveal seemingly hidden information.

A trillion frames per second

The first is a video from researchers at MIT who've developed a trillion-frame-per-second (fps) camera. That's correct - a trillion. You're probably used to watching video in the region of 30 fps, and that's fast enough to trick your mind into perceiving motion between frames.

However, this camera is capable of capturing light as it travels from point A to point B. Although it doesn't seem to be able to capture the movement of individual photons, it does seem able to capture individual pulses of light as they move across the frame or scatter as they interact with certain materials.

Interestingly, it's dubbed by its creators the world's 'slowest fastest camera' - despite being able to capture the speed of light, it can only record data in two dimensions and only one of these is spatial (the other is time).

So in order to record enough data to obtain a multidimensional movie, it must record the scene multiple times from slightly different angles and this takes time (up to an hour apparently). Anyway, the video below elaborates on this and features some of the incredible footage captured by the camera.

http://youtu.be/EtsXgODHMWk?hd=1

As the camera requires multiple takes to obtain enough data, it seems that its applications are somewhat limited. However, its ability to capture light as it scatters across a scene is certainly valuable in the analysis of different materials and could even be used for what its designers describe as 'ultrasound with light'. Read more here and here.

The camera never lies?

The first time I saw this I was pretty stunned. This video outlines a new and simple method of realistically inserting objects into an image after it has been taken (in post-production, essentially). This is done without the user having to perform complex measurements of perspective or lighting - instead, with minimal annotation the user can place objects into an image and the system will work out all the necessary lighting conditions to which they should conform. The result, as you will see, is incredible, with the inserted objects appearing as if they existed in the original scene.

What's more, the researchers also found that subjects were unable to tell the difference between real images and images generated by their system. It looks so good it's almost a little disturbing.

You sort of have to see it to believe it:

http://www.vimeo.com/28962540

You can read more about it here, or find their research paper here.

Reconstructing reality?

The last two videos are also pretty smart, describing processes by which poor quality images can be reinterpreted or reconstructed using the information within them.

If you can get past the rather dry voice-over, the first involves the reinterpretation of data within an image, allowing one to:

"Decide later if it stays a photo, becomes a video or turns into a lightfield so you can digitally refocus"

http://youtu.be/mAS2IxieUj4

The final video is one you are likely to have come across, and details an extraordinary feature in the upcoming release of Adobe's Photoshop series (CS6): an image-deblurring feature which seems to work remarkably well, able to pluck lost detail from what seems like nowhere:

http://www.youtube.com/watch?v=Q10kwKm77RY&feature=related

I definitely ran out of steam towards the end of this post.

Thanks.

In Photography, Science, Video Tags "Speed of Light", Camera, MIT, photography, Photon, Photoshop, Reality, Science, technology, video
2 Comments

Making a racket on the Soundwall

Ed Prosser September 22, 2011

So today I was very excited because I got the chance to play around with Lotto Lab's Soundwall - a mammoth interactive speaker array, currently housed at the Science Museum in London.

The wall features a total of 77 speakers and is controlled via a handy touch-screen interface. The interface essentially allows the user to bounce sound from three mono tracks (or up to 8 analogue inputs) across the wall in real time, much to the delight, or perhaps annoyance, of everyone within hearing distance.

The simple touch-screen interface includes volume faders for each of the three tracks, which are each represented by one of three coloured balls. These coloured balls represent where the sound is localised on the speaker wall: as the user moves the balls across the screen, the sound on the wall pans accordingly. This allows the user to throw sound across the wall in all directions - just imagine a ball of sound bouncing about inside your head and you're almost there... It's certainly very cool to play with.
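
For the technically curious, here's a very rough guess at the kind of logic behind that panning (this is my own sketch, not Lotto Lab's code, and the 11 x 7 speaker layout is an assumption purely for illustration): each speaker's gain simply falls off with its distance from the ball's position on the touch screen.

```python
# A speculative sketch of ball-position panning across a speaker wall.
# NOT Lotto Lab's implementation: the 11 x 7 layout, the Gaussian falloff
# and the 'spread' value are all assumptions made for illustration.
import numpy as np

# Hypothetical layout: 77 speakers on an 11 x 7 grid, with coordinates
# normalised to the unit square, matching the touch-screen coordinates.
GRID_COLS, GRID_ROWS = 11, 7
xs, ys = np.meshgrid(np.linspace(0, 1, GRID_COLS), np.linspace(0, 1, GRID_ROWS))
SPEAKER_POSITIONS = np.column_stack([xs.ravel(), ys.ravel()])  # shape (77, 2)


def speaker_gains(ball_xy, spread=0.15):
    """Per-speaker gains for one track, given its ball's on-screen position."""
    dist = np.linalg.norm(SPEAKER_POSITIONS - np.asarray(ball_xy), axis=1)
    gains = np.exp(-((dist / spread) ** 2))  # Gaussian falloff around the ball
    return gains / gains.sum()               # keep the overall level roughly constant


if __name__ == "__main__":
    # Dragging a ball from left to right pans the track across the wall:
    # the loudest speaker moves along the grid as the ball moves.
    for x in (0.1, 0.5, 0.9):
        g = speaker_gains((x, 0.5))
        print(f"ball at x={x:.1f}: loudest speaker index {int(np.argmax(g))}")
```

Multiply each track's samples by its gain vector and you would get 77 per-speaker feeds; move the ball and the mix follows it across the wall.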

So I tried out a collection of 'racket'-themed samples which I have prepared for use during next week's Science Museum Lates:

http://api.soundcloud.com/tracks/23921324

... and here's a clip of me standing in front of the wall as David Robertson throws the sound all over the place; you can hear an interview with David in the next episode of Tomorrow's Tentacles:

http://api.soundcloud.com/tracks/23925050

Although the wall's functionality is pretty limited at the moment, it will be interesting to see how its use is expanded. It could certainly do with a few more features, perhaps expanding the user interface to include a selection of tracks or adding effects that could be controlled via the touch screen. Its use as a performance tool is certainly very attractive and I'd love to experiment with it further - watch this space!

In Audio, Interviews, Radio, Science Tags Lotto lab, Science, Science Museum, Soundwall, technology
3 Comments
