
March 09, 2007

Linear images for the rest of us

I think one of the most obvious weaknesses of linear image files is the inherent difficulty of explaining them clearly.
I've had them explained to me many times, and I've tried many times to explain them myself, with mixed results.
Having a thick Italian accent doesn't help, I may add (I might get away with it on this blog, though).

Yet I can't seem to give up on this: digital linear images are what got me excited again after working for over 17 years in CG, and they bring full circle everything I know about art, images, light, technology and the art of making a perfect ragu' sauce. Well, maybe not the ragu', although that sauce does require a series of sequential steps that need to be religiously followed in order to achieve a superior result...
And just as I want everybody to like my ragu', I'd like everyone to embrace linear images as the way to go.

Enough with the ragu now, let's get to the meat of this (pardon the pun).

When 16bit image software became available a while ago I felt that it was the solution to all the problems we were having with digital images: banding, posterization, clipped whites and blacks, etc. It turned out to be just an improved 8bit: while banding/posterization was alleviated, clipped whites and blacks were still present.
8bit images store black as zero and white as 255; 16bit instead stores white as 32768, so you have more steps between black and white to play with. 8bit and 16bit images are also usually called INTEGER formats, as they only use integer (whole) numbers.
Wouldn't it be wonderful if images could have values that are darker than black and brighter than white? Just like in real life...
That's what a 32bit linear image format (or HDR) does (and more). Besides the higher number of bits, it is important to note that 32bit is a FLOATING POINT format.
What does "floating" mean? It means this format can store numbers as fractions, not just whole numbers: while you may not care less about integers or floats, in short the 32bit floating point format allows an image to hold values darker than black and brighter than white.
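To make the integer-versus-float difference concrete, here is a minimal sketch (assuming Python with NumPy, purely my choice of illustration, not something the formats themselves require) of what happens to a value four times brighter than white in each case:

```python
import numpy as np

# A pixel four times brighter than "white" (1.0 in linear floating point terms).
bright = 4.0

# 32bit float: the value is stored as-is, so the headroom survives.
float_pixel = np.float32(bright)                      # -> 4.0

# 8bit integer: anything above white is simply clipped to 255.
int_pixel = np.uint8(np.clip(bright * 255, 0, 255))   # -> 255

print(float_pixel, int_pixel)
```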
Imagine a photo you shot years ago, and you realize there is some detail in the shadows that you'd like to recover, something that makes you wish you had shot another set at a higher exposure. Now imagine opening that photo in Photoshop and simply changing the exposure to reveal everything you want to see, right into the darkest part of the image: that photo would be a faithful representation of the lighting conditions of the scene you wanted to capture, from the darkest shadow under the car to the bright sun in the sky. If you had such an image, you would use software to decide what to overexpose and what to underexpose, trying to get back to what you saw with your own eyes, without the limitations of traditional photos.
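As a rough sketch of what that exposure change amounts to (again assuming Python/NumPy purely for illustration; the real work would happen in Photoshop or any HDR-aware tool), pushing exposure on a linear image is just a multiplication:

```python
import numpy as np

def expose(linear_img, stops):
    """Push or pull exposure on a linear (scene-referred) image.
    In linear space, one stop is simply a factor of two."""
    return linear_img * (2.0 ** stops)

# Hypothetical deep-shadow value that is invisible on screen...
shadow = np.float32(0.002)
# ...pushed four stops brighter, back into the visible range.
print(expose(shadow, +4))   # ~0.032
```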
This is, by the way, what artists have been trying to do since they started scratching graffiti on cave walls many years ago: capture for others to see what they have seen with their own eyes.
Now I don't want to give the impression that this is the ultimate tool to make art: HDR won't give you a Pollock or a Van Gogh, but it may aid you in getting the picture you want, the one you saw with your eyes before triggering the shutter.
This is just a tool, and like any other tool it requires mastering. Artists for centuries have been trying to compress the vast dynamic range before their eyes onto a flat surface: some of their results have been astounding, and by looking at some tonal mapping images I can see how HDR can aid photographers in getting closer to their vision. I'm not saying that everyone could be a Van Gogh just by using tonal mapping: I'm saying that we have a new tool in our arsenal that can get our photos to look closer to that world, which is the one we see every day.
HDR is the ultimate image format, and not just because it has darker blacks and brighter whites than an 8/16bit integer image: it also stores values in LINEAR progression (actually its most advanced feature), and it can hold values below zero and far above white, unlike 8/16bit integer images. So how do you display such an image? There are some $40k monitors that can display HDR natively, but everybody else will see a "portion" of the HDR image with a viewing gamma applied to it in order to display it correctly on a monitor. Note that HDR images have NO GAMMA, but monitors do, so a gamma needs to be applied to the HDR image before the data is sent to the display.
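For the curious, the viewing transform a 32bit-aware viewer applies might look roughly like this (a minimal sketch assuming Python/NumPy and a plain 2.2 gamma; real viewers can use more sophisticated tone mapping):

```python
import numpy as np

def to_display(linear_img, gamma=2.2):
    """Show a window of a linear HDR image on an ordinary monitor:
    clip to the displayable 0..1 range, then apply a viewing gamma."""
    window = np.clip(linear_img, 0.0, 1.0)
    return window ** (1.0 / gamma)
```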
The fact that HDR has no gamma makes the images behave like their real-world counterparts: have you ever applied a gaussian blur to an image trying to replicate an out-of-focus shot? If you have, you should be familiar with the results, which are quite mushy... On the other hand, the same gaussian blur applied to an HDR image with values beyond the visible blacks and whites will give you a perfect bokeh.
I know there are good focus plugins, but those plugins are designed to "mimic" the behavior of real world effects, something HDR does without mimicking. It's the real thing.
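Here is a small sketch of why the two blurs behave so differently (assuming Python with NumPy and SciPy, and a made-up test frame rather than the sunset image below): the highlight in the linear float frame keeps its energy and spreads it the way a real lens would, while the 8bit copy was already capped at white before the blur ever ran.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Hypothetical linear HDR frame: dim background plus one very hot highlight.
hdr = rng.uniform(0.0, 0.2, (64, 64)).astype(np.float32)
hdr[32, 32] = 50.0                       # a light source far above "white"

# Blur in linear space: the highlight spreads its energy, like real bokeh.
blur_linear = gaussian_filter(hdr, sigma=3)

# Clip to 8bit first, then blur: the highlight was capped at 255 and just smears.
ldr = np.clip(hdr * 255, 0, 255).astype(np.uint8)
blur_8bit = gaussian_filter(ldr.astype(np.float32), sigma=3) / 255

print(blur_linear.max(), blur_8bit.max())
```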

See for yourself in the examples below.

This is a sunset HDR image as it looks on my screen:

HDR image

This is the same image after a motion blur filter with a value of 20:

HDR blur

This is the same motion blur filter applied AFTER converting the HDR image to 8bit:

8bit blur

This is kind of an extreme example, but it does the job.

I hope this clarifies the basics of HDR linear images enough to illustrate the potential of this image format without going into too much technical jargon. I believe in the near future we will have cameras that can capture a great deal of dynamic range in just one shot: there are instances where this can be overkill, like a shot of a grass field, but HDR can be a powerful tool for the photographer who is willing to explore all the possibilities of his photos without being limited to the 7-8 stops of current digital cameras.

Here's the original HDR psd file used for the examples above.

http://giancarlolari.net/LIN/HDR_fisheye_sunset32.psd.zip 

 

March 08, 2007

About Golden rules or PHI

An image is worth a thousand words.

Nerona

But two images are worth a billion:

Nerona PHI 

http://en.wikipedia.org/wiki/Golden_ratio 

Thoughts on MP, DR and 4/3rds lens system

getty Leaf

I learnt a long time ago that in terms of photo equipment what really matters is glass first, and film second.
The camera used to be the thing that connects the glass with the film. I've seen incredible photos shot with crappy cameras, and I've also seen gorgeous photos shot with lenses that were scratched and smeared with fingerprints.
In the end it is the photographer who made the difference, and who STILL makes the difference.

In the digital age there is not much difference, except that the camera IS now the film (certainly not the compact flash card). The camera can also be a portable digital darkroom, an equivalent of the one you have on your computer in your studio.
And like film, your camera also dictates what resolution your photos will be. Just as you don't shoot 35mm format and expect it to look like large format (well, the color might, but the resolution won't), you should have similar expectations for digital.
Similarly, the 4/3rds system will ALWAYS lag behind bigger sensors in terms of resolution, since the maximum pixel density available at any given time is roughly the same for all sensors.
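As a back-of-the-envelope check (a sketch assuming the usual published sensor dimensions of roughly 17.3x13mm for 4/3rds and 36x24mm for full frame), equal pixel density means pixel count scales with sensor area:

```python
# At equal pixel density, megapixels scale with sensor area.
four_thirds_area = 17.3 * 13.0      # mm^2, approx.
full_frame_area = 36.0 * 24.0       # mm^2

ratio = four_thirds_area / full_frame_area
print(f"4/3rds area is about {ratio:.0%} of full frame")   # ~26%
```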
If you need the best resolution currently available then go for medium or large format: 4/3rds (or any APS or similar size) is not for you.
The 4/3rds is for you if you don't care whether you get a few pixels more than the rest of the crop (pardon the pun), as long as it gives you photos as good as you want them.
It is also for you if you believe in a near-telecentric lens system designed for sensors instead of emulsion, and you believe that resolution (or MP) is secondary to color reproduction and quick handling, which allows more opportunities to capture the image in front of you.
This doesn't mean I wouldn't mind more resolution in a 4/3rds camera: I do a lot of panoramas and I can always use some extra detail... maybe... sometimes I do get pictures where I wish I had more MP, or less grain for that matter: detail is detail, and after all the megapixel race is similar to the one we had over the last century for faster emulsions with smaller silver grain.
4/3rds isn't medium format, and it never will be: the choice of a smaller sensor system has pros and cons, but in the end if 4/3rds gives you what you NEED, then it is what you need for the rest of your photographic career (or till you change your mind because you read something in a forum).

That's about resolution, but what about noise and Dynamic Range? Does the 4/3rds have a disadvantage there?
Sensors do have an enormous disadvantage there, but all of them do, not just 4/3rds, IF compared to film. We are just not there with film DR yet, at least not with small DSLRs.
Right now we are stuck in a technological limbo: our techno substitute for film has some clear advantages, but it does lack the dynamic range of the best emulsions. The best DSLRs have a measured DR of 7-9 stops, compared to the 13-14 stops of a good film. Even considering that 1-2 stops of software highlight recovery is possible, it is a far cry from film, and you need to bracket pictures to get more DR, with all the issues that implies: mainly a complicated workflow and problems with moving objects.
Of course the DR numbers above are approximate, as it is very difficult to compare the dynamic range of a sensor against film: a sensor records light in a linear progression while film is logarithmic. Also, sensor dynamic range varies, since the noise in the dark areas increases with heat.
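To give those stop numbers some scale (a stop is simply a doubling of light, so this is just arithmetic, not a measurement of any particular camera or film):

```python
# A stop is a doubling of light, so the usable contrast ratio grows as 2**stops.
for stops in (7, 9, 13, 14):
    print(f"{stops} stops ~ {2 ** stops}:1")
# 7 stops ~ 128:1, 9 stops ~ 512:1, 13 stops ~ 8192:1, 14 stops ~ 16384:1
```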
The big difference is that emulsion stores a lot of light information at the extreme latitude of the exposure, which gives you an image with good color information even in very bright headlights. For that very reason film highlights look gorgeous, way better than those from DSLR sensors. For photographers who share that view, it's easier to expose high contrast scenes with film than with digital, unless you can afford a Phase One... This is, by the way, also the reason why a lot of film makers still shoot movies on film.
I can get nice highlights from a DSLR using bracketing and Photoshop, but a sensor with lower noise would allow less bracketing (2 stops = 1 bracket), less processing and therefore more shooting: more DR is something I NEED now, so I work around the technological limitation of today's sensors by bracketing, and I do end up getting what I WANT. Sometimes.
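For what it's worth, the idea behind merging a bracketed set back into a single linear HDR image can be sketched like this (assuming Python/NumPy and linear, raw-like frames normalized to 0..1; this is only an illustration of the principle, not my actual Photoshop workflow):

```python
import numpy as np

def merge_brackets(frames, stops):
    """Naive HDR merge of linear frames shot a known number of stops apart.
    Each frame is scaled back to a common exposure, and near-black or clipped
    pixels are excluded from the average.  A sketch only; real tools do more."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weight = np.zeros_like(acc)
    for frame, s in zip(frames, stops):
        valid = (frame > 0.01) & (frame < 0.99)     # keep well-exposed pixels
        acc += np.where(valid, frame / (2.0 ** s), 0.0)
        weight += valid
    return (acc / np.maximum(weight, 1)).astype(np.float32)
```

The moving-objects problem I mentioned above is exactly what a naive average like this cannot solve.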

