Anyone who regularly uses a video camera will know that the devices do not
see the world the way
we do. The human visual system can perceive a scene that contains both
bright highlights and
dark shadows, yet is able to process that information in such a way that
it can simultaneously
expose for both lighting extremes – up to a point, at least. Video cameras, however, can only work at a single f-stop at any one time, and so must make compromises.
Now, however,
researchers from the UK’s University of Warwick claim to have the solution
to such problems,
in the form of the world’s first full High Dynamic Range (HDR) video system.
HDR has been in development for some time – Sunnybrook Technologies
unveiled a High
Dynamic Range display system back in 2004, and just last year BenQ
joined a list of several manufacturers to have released HDR still cameras.
Even HDR video has been shot before,
albeit on a limited, experimental basis. What the researchers at Warwick
claim to have
developed is the world’s first full-motion HDR video system, one that covers everything from image capture through to display.
“HDR imagery offers a more representative description of real world
lighting by storing data
with a higher bit-depth per pixel than more conventional images,”
explained Prof. Alan
Chalmers, of Warwick’s WMG Digital Laboratory. “Although HDR imagery
for static images
has been around for 15 years, it has not been possible to capture HDR video
until now.
However, such HDR images are typically painstakingly created in computer
graphics or
generated from a number of static images, often merging only 4 exposures
at different
stops to build an HDR image.”
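For illustration, the kind of multi-exposure merge described above can be sketched in a few lines of Python. This is a generic, textbook-style example rather than the Warwick system, and it assumes the bracketed frames are already aligned and linear (gamma removed), with known exposure times.

```python
# A minimal sketch (not the Warwick pipeline) of merging bracketed exposures
# into a single HDR radiance map. Assumes linear, aligned frames in [0, 1]
# and known exposure times in seconds.
import numpy as np

def merge_exposures(frames, exposure_times):
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        # Hat-shaped weight: trust mid-tones, distrust clipped shadows/highlights.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)     # each frame's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-6)

# Example: four exposures at different stops, as in the case mentioned above.
# hdr = merge_exposures([f1, f2, f3, f4], [1/1000, 1/250, 1/60, 1/15])
```

Pixels that are clipped in any one exposure contribute little to the result, which is why several stops of bracketing are needed to recover both deep shadows and bright highlights.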
The new system, by contrast, captures 20 f-stops per frame of 1080p high-def
video,
at the NTSC-standard rate of 30 frames per second. In post-production, the optimum exposures
can then be selected and/or combined for each shot, via a “tone-mapping” procedure.
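Tone mapping is the step that compresses that 20-stop range into something a conventional display can show. As a rough illustration only, since the operator used in the Warwick software is not described, a simple global operator in the style of Reinhard et al. looks like this:

```python
# A minimal sketch of a global tone-mapping operator (Reinhard-style),
# not necessarily the one used in the Warwick post-production tools.
import numpy as np

def tone_map(hdr, key=0.18):
    """Compress a linear HDR radiance image (H, W, 3) into [0, 1] for display."""
    # Luminance via the Rec. 709 weights.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))   # scene's average log luminance
    scaled = key * lum / log_avg                    # map that average to a mid-grey "key"
    mapped = scaled / (1.0 + scaled)                # compress highlights asymptotically
    ratio = mapped / (lum + 1e-6)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)
```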
A process called Image-Based Lighting can also be utilized, in which computer-created objects are added to real-world footage and appear to be lit by the light captured in that footage – in one example, the light of real-world explosions is reflected on the sides of a computer-generated car.
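The core idea of Image-Based Lighting is that the HDR footage itself is treated as an environment map, and a computer-generated surface point gathers its light from that map rather than from artificial light sources. The function below is a generic diffuse (Lambertian) example using a latitude-longitude map, purely for illustration; it is not the renderer used in the Warwick demonstration.

```python
# A minimal sketch of diffuse Image-Based Lighting: shade a CG surface point
# using the light recorded in an HDR environment map (lat-long layout).
import numpy as np

def diffuse_ibl(env_map, normal):
    """env_map: (H, W, 3) linear HDR lat-long image; normal: unit 3-vector (y up)."""
    h, w, _ = env_map.shape
    # Direction and solid angle for every texel of the lat-long map.
    theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle, 0..pi
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi      # azimuth, 0..2pi
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)], axis=-1)
    solid_angle = (np.pi / h) * (2.0 * np.pi / w) * np.sin(theta)
    # Lambertian shading: integrate incoming radiance weighted by cos(angle to normal).
    cos_term = np.clip(dirs @ np.asarray(normal), 0.0, None)
    return (env_map * (cos_term * solid_angle)[..., None]).sum(axis=(0, 1)) / np.pi
```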
Of course, all of that extra data takes up some space – each frame is 24 MB in size, which works out to 42 GB per minute. To address that rather large quandary, the researchers are collaborating
are collaborating
with HDR tech firm goHDR to develop software that will compress the HDR
footage by at least
100-fold. This should allow existing editing systems to handle the video files.
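The quoted figures are easy to check with a little arithmetic (MB and GB are treated as binary units here, and nothing is assumed about the actual per-frame layout):

```python
# Back-of-the-envelope check of the data rates quoted above.
frame_mb = 24                          # MB per frame of 1080p HDR video
fps = 30                               # NTSC frame rate
raw_gb_per_min = frame_mb * fps * 60 / 1024
print(round(raw_gb_per_min, 1))        # ~42.2 GB of raw HDR data per minute
print(round(raw_gb_per_min / 100, 2))  # ~0.42 GB/min if the 100-fold compression target is met
```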
The final step in the process is the HDR monitor. It consists of an LED
panel which projects
through an LCD panel placed in front of it. The combination of the two
screens is necessary
to provide all of the lighting information.
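The reason two stacked layers are needed is that their contrasts multiply: a modulated LED backlight behind an LCD can reach a far higher ratio between its brightest and darkest pixels than either layer alone. The sketch below illustrates that principle with made-up numbers; it does not describe the actual drive scheme or specifications of the Warwick monitor.

```python
# A minimal sketch of why a dual-layer (LED backlight + LCD) display extends
# dynamic range: the two layers' contrasts multiply. All values are
# illustrative assumptions, not real hardware specifications.
import numpy as np

def dual_layer_output(target_luminance, max_nits=3000.0, lcd_contrast=1000.0):
    """Split a target luminance image (in nits) between an LED layer and an LCD layer."""
    target = np.clip(target_luminance, 0.0, max_nits)
    # Drive the LED backlight with the square root of the target (in a real
    # display this layer is low-resolution and blurred)...
    backlight = np.sqrt(target * max_nits)
    # ...and let the LCD supply the remaining fine-grained attenuation,
    # limited by its own native contrast ratio.
    transmittance = np.clip(target / np.maximum(backlight, 1e-6),
                            1.0 / lcd_contrast, 1.0)
    return backlight * transmittance   # what the viewer actually sees
```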
The Warwick team believe that the technology would be useful for applications
such as
televised sports coverage (in which a football moves in and out of sunlight
and shadows,
for instance), conducting or recording surgery, or for security systems.
It could also find
use in feature film-making, as the researchers state that it could be used to
create 3D images
that don’t require viewers to wear special glasses.