[HN Gopher] Why dark and light is complicated in photographs
       ___________________________________________________________________
        
       Why dark and light is complicated in photographs
        
       Author : smbv
       Score  : 91 points
       Date   : 2022-03-13 15:29 UTC (7 hours ago)
        
 (HTM) web link (aaronhertzmann.com)
 (TXT) w3m dump (aaronhertzmann.com)
        
       | whoisburbansky wrote:
       | The tone of the article stands out to me; not much in the way of
       | exaggeration or opinions on "the best" way to do a certain thing,
       | but an almost dispassionate tour of some of the different ways
       | we've thought about light and dark artistically, over time.
       | Thoroughly enjoyed the read!
        
         | twoWhlsGud wrote:
         | Agree on the nice historical summary. The end of the final
         | paragraph struck me in particular:
         | 
         | "My photos are a collaboration between me and the algorithm
         | designers and camera manufacturers, and they reflect a
         | combination of aesthetic decisions made by each of us."
         | 
         | The one downside of taking pictures with such a strongly
         | opinionated technology is that your pictures are going to look
         | like everyone else's (or increasingly just weird). Ansel Adams
          | developed his darkroom technique over decades - and as the
          | author mentions, applying it took hours of meticulous
          | labor. So his output really did look different from most
          | other people's.
         | 
          | You can still differentiate your photographs on the basis of
          | their subject matter, of course. But if everyone is shooting
         | out of the same algorithmic pipeline, making your pictures look
         | better technically is going to be increasingly hard. (And given
         | current limitations, someone knowledgeable in the field today
         | can still usually tell the difference between photographs taken
         | with good lenses carefully deployed and cell phone output. But
         | it's unclear how long that will be true.)
        
           | goldenkey wrote:
           | Lenses can be added to cell phones through 3rd party
            | attachments. And can't these ML options be turned off?
            | That should make the phone camera much more like a
            | standard camera.
        
       | marban wrote:
       | And one day you'll look back at all your Jpegs and wonder why you
       | messed up all your memories with tasteless HDR effects.
        
         | nyanpasu64 wrote:
         | HDR capture (preserving more than 8 bits of light level,
         | preserving meaningful detail in the shadows and not clipping
         | highlights) preserves more of a scene than non-HDR JPEGs. Tone-
         | mapping is the "tasteless HDR effect" which I have mixed
         | feelings about (and can absolutely be done poorly, resulting in
         | light halos around darker objects).
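          | 
          | For a concrete sense of the global case, here is a minimal
          | sketch (Python/numpy, assuming scene-linear input) of the
          | Reinhard curve, one common global tone-mapping operator;
          | the halos come from _local_ operators that vary a curve
          | like this per region:
          | 
          |   import numpy as np
          | 
          |   def reinhard_tonemap(hdr, exposure=1.0):
          |       # Compress unbounded scene-linear radiance into
          |       # [0, 1) with a smooth rolloff, so highlights
          |       # roll off instead of clipping.
          |       x = hdr * exposure
          |       ldr = x / (1.0 + x)
          |       # Gamma-encode for an 8-bit display.
          |       return (255 * ldr ** (1 / 2.2)).astype(np.uint8)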
        
       | antattack wrote:
        | I'm surprised the article claims that photography and film
        | are no longer 'racist'. Most of my photos underexpose dark
        | complexions. It might be down to limited dynamic range, but
        | the problem still exists.
        
         | Ma8ee wrote:
          | Is it dark complexions, or everything dark? It might be
          | that it is only when it is a face that you notice the lack
          | of detail.
        
           | wizzwizz4 wrote:
           | "Racist" is being used metaphorically. The choice of film
           | chemistry to use will affect what sort of faces are captured
           | well, and what sort of faces are captured poorly.
        
         | rkuska wrote:
          | Roy DeCarava made some of the most beautifully developed
          | prints of black people. Here is an interesting article on
          | that subject:
         | https://www.nytimes.com/2015/02/22/magazine/a-true-picture-o...
        
       | antiterra wrote:
       | Talking about film is particularly complicated, as film does not
       | have an entirely linear response to light. This is called
       | reciprocity failure and means that you often need to expose way
       | longer than 2x the time to have the effect of 2x the light.
       | 
       | For digital, the data directly from camera sensors almost always
       | needs some correction, de-mosaicing or massaging to generate an
       | image viewable on a screen. This requires the camera to make what
       | ends up being an aesthetic decision on what the photo looks like.
       | Detail isn't just how bright or how dark, but also the available
       | gradients in between. This means there are cases where the
       | dynamic range is automatically expanded (instead of clipped) and
       | contrast unnaturally increased in order to have a photo that
       | isn't just mud.
       | 
       | Ultimately, this means that technical considerations map directly
       | to artistic ones, and there is no objectively correct image from
       | sensor data. The idea that a 'no filter' picture conveys some
       | kind of divine truth is a myth.
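        | 
        | A toy sketch of that decision (Python/numpy; demosaicing and
        | white balance omitted, and both curves are invented here for
        | illustration) -- the same linear sensor data yields two
        | different, equally "unfiltered" photos:
        | 
        |   import numpy as np
        | 
        |   def develop(linear, tone_curve):
        |       # Same raw data in, different photo out, depending
        |       # entirely on which tone curve is baked in.
        |       x = tone_curve(np.clip(linear, 0.0, 1.0))
        |       return (255 * x ** (1 / 2.2)).astype(np.uint8)
        | 
        |   flat   = lambda x: x
        |   punchy = lambda x: np.clip(1.5 * (x - 0.5) + 0.5, 0, 1)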
        
         | leephillips wrote:
         | Good point about the meaninglessness of a "no filter" concept.
          | The article is very interesting in regard to lightness
          | mapping, and there is another, related subject: hue
          | mapping. This is a good part of the reason why pictures
          | from different camera brands look different.
         | 
         | Your description of reciprocity failure is not quite right. The
         | idea is that if you double the time the shutter is open and
         | also decrease the aperture by one stop, you should not change
         | the amount of light hitting the film (you will change other
         | things, of course). The overall brightness should be the same
         | when you make these "reciprocal" adjustments. This does in fact
         | hold pretty well within a certain range of shutter speeds.
          | Reciprocity failure occurs at much longer or much shorter
          | exposures, where the reciprocal relationship doesn't quite
          | work.
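          | 
          | In code form (a sketch; marked f-stops like f/11 are
          | rounded, so the exact sqrt(2) step is used here):
          | 
          |   from math import log2, sqrt
          | 
          |   def ev(aperture, shutter_s):
          |       # Exposure value: EV = log2(N^2 / t). Settings
          |       # with equal EV admit the same total light --
          |       # the "reciprocal" in reciprocity.
          |       return log2(aperture ** 2 / shutter_s)
          | 
          |   print(ev(8.0, 1 / 250))            # ~13.97
          |   print(ev(8.0 * sqrt(2), 1 / 125))  # ~13.97: one stop
          |                                      # smaller, twice as long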
        
       | markdown wrote:
       | Quite a topical article, considering the controversy around how
       | they made a potentially great Batman movie unwatchable by turning
       | large parts of it into a radio drama where you can't see anything
       | and have to figure out what's happening with nothing but your
       | ears.
        
         | unfocussed_mike wrote:
         | Oh this is the most perfect description of the second worst
         | trend in cinema.
         | 
         | The worst trend being loud music and mumbled dialogue that make
         | it impossible to experience even as a radio drama.
        
       | ElephantsMyAnus wrote:
        | It seems to me there must be some kind of error in the color
        | calibration of most cameras. They make shadows much darker than
       | they are, and bright areas much brighter than they are. It's not
       | from a lack of dynamic range.
        
         | unfocussed_mike wrote:
         | How is it not from a lack of dynamic range?
         | 
          | Colour transparency film (e.g. Fuji Velvia -- RVP50) shows
          | the same thing just as clearly.
         | 
         | You basically can't map real world light into the dynamic range
         | of a typical camera without causing some of this experience,
         | can you?
         | 
         | The question is how you determine how dark shadows should be --
         | your brain is doing a lot of work to hide from you the tricks
         | it uses to make shadows appear less dark than they might be
         | with a linear response.
         | 
         | Or even how they would look with a non-linear response that is
         | even across the "frame"; your brain is doing localised
         | dodge/burn type work, constantly.
         | 
         | Camera manufacturers have "tastemakers" for this stuff on
         | digital, just as film manufacturers used to have them for film.
        
           | tomc1985 wrote:
            | There's something off about the brightness sensitivity
            | curves. If I can dial the shadow controls way, way up and
            | salvage an otherwise botched, underexposed photo, why do
            | I have to do so manually?
           | 
           | The dynamic range is clearly there. And we're not talking
           | about such ridiculous values that the sensor noise becomes
           | prominent.
        
             | unfocussed_mike wrote:
             | > why is it that I have to do so manually?
             | 
              | You can do that correction in that situation because
              | you've looked at the image, you _know what it is meant
              | to be_, and you can decide on a set of adjustments that
              | produce something approximating what you want,
              | perceptually.
              | 
              | But without truly extensive scene knowledge, cameras
              | can't do that automatically, and they also can't know
              | which information important to the photographer they'd
              | be affecting if they did.
             | 
             | Cameras have to try to ascertain what would be middle grey
             | in a scene and then apply a general purpose tone curve to
             | an image, but they do not know what is _in_ the scene.
             | 
             | They can't even know for sure if the photo they are being
             | asked to take is properly exposed by any absolute
             | definition, in fact.
             | 
              |  _[I cut out a lot of this because I don't think it's
              | going to be easy to complete the explanation here]_
        
               | ElephantsMyAnus wrote:
               | No, the problem is VERY OBVIOUSLY more severe than that.
                | It's really as if the images were treated as linear,
                | which they are not (they use gamma correction).
        
               | wonnage wrote:
                | Gamma correction is compression: it sacrifices data in
                | regions where the eye is less sensitive to gain more
                | precision in the sensitive ranges. Images would look the
                | same without it; you'd just be wasting bits encoding
                | differences that the eye can't see.
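                | 
                | Concretely, the sRGB transfer pair (a sketch in
                | Python/numpy; the round trip is lossless in float,
                | and the compression only bites once you quantize
                | to 8 bits):
                | 
                |   import numpy as np
                | 
                |   def srgb_encode(linear):
                |       # Spends more code values on the shadows,
                |       # where the eye sees small differences.
                |       a = np.asarray(linear, dtype=float)
                |       return np.where(
                |           a <= 0.0031308,
                |           12.92 * a,
                |           1.055 * a ** (1 / 2.4) - 0.055)
                | 
                |   def srgb_decode(encoded):
                |       # Exact inverse of srgb_encode.
                |       a = np.asarray(encoded, dtype=float)
                |       return np.where(
                |           a <= 0.04045,
                |           a / 12.92,
                |           ((a + 0.055) / 1.055) ** 2.4)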
        
               | unfocussed_mike wrote:
               | Honestly, whatever your understanding is here, you should
               | probably build a demonstration to get it across to
               | people.
               | 
               | Have you ever shot photographs with a colour transparency
               | film?
        
               | dagmx wrote:
                | This is also incorrect, and it trivializes the color
                | science. Images may use gamma correction, or they may
                | not. Trying to describe it in terms of gamma alone is
                | like trying to describe food in terms of saltiness
                | alone. You're ignoring tons of other factors.
        
           | ElephantsMyAnus wrote:
           | It is different because it is obvious that shadows are darker
           | than in reality while highlights are much brighter than in
           | reality.
           | 
            | Any brain filtering would have to affect the photos as
            | well, even if it were true.
            | 
            | No common image format uses linear response. It would
            | explain this problem if cameras treated them as linear.
           | 
            | Maybe they should just make cameras capture physically
            | correct colors instead of relying on people, as the
            | typical person will always choose extreme contrast that
            | makes the camera unusable (and contrast can easily be
            | increased in editing).
        
             | LocalH wrote:
              | > Any brain filtering would have to affect the photos as
              | > well, even if it were true.
             | 
             | Not when the dynamic range of reality is much greater than
             | that of photographs, and your visual system is one of the
             | best visual processors in existence. It's like reducing a
             | 24-bit image to 16-bit - the image is "good enough" to
             | identify the subject, but it is quite lossy. Photography
             | itself is a lossy process.
        
             | dagmx wrote:
             | What is "brain filtering" and why would you think either
             | film or digital can reproduce the same visual effect as our
             | eyes see?
             | 
             | Our brain does a perceptual aggregation of multiple frames
             | and inputs. This is not how cameras work.
             | 
             | Also "make cameras take physically correct colors" is
             | impossible unless you're talking about spectral capture,
             | which is orders if magnitudes more complex. If you're using
             | just RGB photosites AND RGB displays, there is no such
             | thing as physically correct colors. Everything will just be
             | a mapping at best, with the best that color science experts
             | can actually provide.
        
               | ElephantsMyAnus wrote:
                | The one I was replying to talked about brain
                | processing. Whatever it is, it doesn't need to be
                | (and shouldn't be) reproduced in photography, as the
                | photograph gets processed just like everything else
                | when you look at it.
               | 
               | Reality --> eye --> "brain filter"
               | 
                | Reality --> photo --> eye --> "brain filter"
               | 
               | Cameras should only record the colors as accurately as
               | possible. Or if you want to nitpick again, so that the
               | photo stimulates the eye receptors identically to
               | whatever was captured.
        
               | [deleted]
        
               | dagmx wrote:
                | They already do that, to the best of our abilities.
               | 
                | Color is incredibly complex. It's easy to say "we
                | should capture it as accurately as possible", but I
                | don't think you fully comprehend the complexity
                | involved.
               | 
               | Your concept of matching eye receptors is wrong too.
               | Color is perceptual and subjective. Your perception of
               | color is based on your upbringing, your genetics, your
               | environment, your own mental faculties, your mental state
               | etc... What is accurate? Your eyes see some spectral
               | energy, your rods and cones convert those to signals,
               | your brain then adds that into an aggregate set of
               | information that it's constantly infilling and, most
               | importantly, guessing about.
               | 
               | You can't guarantee that multiple people see color the
               | same.
               | 
               | Now even if a camera could hypothetically capture an
               | image accurately to the real world (IMHO only possible
               | with a hypothetical full spectrum sensor), how would you
               | store it? The second you convert it to RGB data it needs
               | a perceptual conversion to the bit depth of the data
               | format. Now even if you have a file format that can
               | efficiently represent this, you'd also need full spectrum
               | displays so that we could beam that exact color to your
               | retinas.
               | 
               | Color science is incredibly complex. You're trying to
               | trivialize it into matching your own narrow perception of
               | color.
        
             | unfocussed_mike wrote:
             | If you have never done this before, I absolutely recommend
             | -- while it is still possible to do this in a practical way
             | -- getting a cheap film camera, getting hold of a proper
             | incident light meter (like a Sekonic L-208 or L-308), and
             | shooting some Fuji Velvia 50 or Provia 100F. Or if you can
             | find it, some modern Ektachrome.
             | 
             | For example you might want to go to a beach or a park and
             | shoot throughout the day on a bright day. Put people or
             | objects in the foreground and then shoot them with either
              | the light behind you or in front. Use the incident metering
              | dome to meter the light.
             | 
             | (you'll need to look this up, but the broad point of it is
             | you stand in the same light as your subject and point the
             | meter into the light, rather than at your subject)
             | 
             | Once you see what transparency film does in high-contrast
             | situations I think you'll better understand what I'm trying
             | to get across.
        
             | nyanpasu64 wrote:
             | I don't think shadows are darker than in reality, but
             | instead don't have their detail captured, or get swamped
             | out by high black levels on screens or glare in the viewing
              | environment. Also, highlights get _clipped_ at a much
              | lower level than in reality (photographs of the sun
              | aren't eye-searing, unlike the real thing).
        
             | [deleted]
        
         | user-the-name wrote:
        
         | marban wrote:
         | The error is the photographer. People just got used to ultra-
         | correction of modern-day phone cams.
        
         | Ma8ee wrote:
         | That is exactly what is happening when you lack dynamic range.
         | 
         | Say that your eye is sensitive from light intensity 0 to 100 in
         | some units, but your camera sensor only handles 40 to 60. That
         | means that everything under 40 will be mapped to black, and
         | everything above 60 will be mapped to white.
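          | 
          | In toy numbers (a sketch of that hypothetical 40-to-60
          | sensor):
          | 
          |   import numpy as np
          | 
          |   scene = np.array([0, 20, 40, 50, 60, 80, 100])
          |   captured = np.clip(scene, 40, 60)     # sensor's range
          |   shown = (captured - 40) * (100 / 20)  # stretch to display
          |   # scene 0..40   -> shown 0   (black)
          |   # scene 60..100 -> shown 100 (white)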
        
           | [deleted]
        
           | ElephantsMyAnus wrote:
            | No, that does not make any sense. That should result in
            | 40, with everything darker also recorded as 40, while 60
            | and everything brighter is recorded as 60. But what you
            | can see is that 40 results in 0 and 60 results in 100.
            | That should never happen unless there is an error in
            | processing.
           | 
           | Only the picture file format should limit what range you can
           | save with any modern camera.
        
             | wizzwizz4 wrote:
             | It does make sense. 40 is black, and 60 is white.
        
               | ElephantsMyAnus wrote:
                | No, it doesn't make sense. You should not be able to
                | capture anything darker than 40 or brighter than 60 if
                | you are limited to 40-60 (actually limited by the file
                | format, not the sensor; sensors today have higher
                | dynamic range than 8-bit sRGB). It should not turn 40
                | into 0 and 60 into 100.
        
               | wizzwizz4 wrote:
               | In real life, a logarithmic brightness scale (which is
               | how human perception works) goes from negative infinity
               | (zero energy) to positive infinity (infinite energy) -
               | excluding both endpoints. 0 is not the bottom, and 100 is
               | not the top.
               | 
               | In real life, photographs are printed on paper. The
               | brightness of light reflecting off paper depends not only
               | on the colour of the paper, but on the brightness of the
               | illumination. (Likewise, photographs displayed on a
               | computer monitor depend on the screen's brightness.)
               | 
                | In real life, human brightness perception depends on the
                | brightness of the environment. An LED can look bright in
                | the dark and dim in sunlight, and range from dim to
                | medium to bright on a cloudy day without anyone really
                | noticing that the clouds between them and the sun are
                | thicker or thinner.
                | 
                | In real life, there _is_ no 0. There _is_ no 100. _Your
                | comment_ doesn't make any sense.
        
               | unfocussed_mike wrote:
                | Right. Even now, with scene programs and AI, metering
                | is still basically a complicated negotiation about
                | establishing middle grey -- when there may be no
                | perceptual middle grey in the scene at all (black cat
                | in a coal bin, polar bear in snow).
               | 
               | The narrow band of sensitivity of a film or sensor has to
               | be sort of moved to where it is needed (by controlling
               | how much light gets in or for how long) according to the
               | result _the photographer is likely to want from their
               | photo_.
               | 
               | Even the most basic of film dead-reckoning methods --
               | Sunny 16 -- relies on subjective input from the
               | photographer:
               | 
               | https://en.wikipedia.org/wiki/Sunny_16_rule
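                | 
                | The mechanical half of Sunny 16 fits in a few lines
                | (a sketch); the subjective half is deciding which
                | lighting condition your scene is actually in:
                | 
                |   def sunny16_shutter(iso, aperture=16.0):
                |       # At f/16 in direct sun, shutter ~ 1/ISO.
                |       # Other apertures scale with the square
                |       # of the f-number ratio.
                |       return (1 / iso) * (aperture / 16) ** 2
                | 
                |   print(sunny16_shutter(100))     # 1/100 s at f/16
                |   print(sunny16_shutter(100, 8))  # 1/400 s at f/8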
               | 
               | And it's up against the nature of human perception of
               | light and dark, which as this classic page demonstrates,
               | is complex:
               | 
               | https://scienceinfo.net/video-chessboard-illusion-
               | confuses-p...
        
               | ubercow13 wrote:
                | It's trivial to take an image editor and any existing
                | image that is as you describe, and adjust the black
                | point to 40% and the white point to 60%. It won't look
                | more correct or realistic at all.
        
             | Ma8ee wrote:
              | That of course depends on how you show it on the
              | screen. You could show those parts of the sensor that
              | didn't register anything (less than 40) as grey, and
              | everything that saturated the sensor as a slightly
              | lighter grey. But people don't tend to like the look of
              | those pictures very much, and they definitely don't
              | look more natural than the conventional processing.
             | 
              | The main limitation isn't the file format. The main
              | limitation is the sensor. At the low end, noise in its
              | various forms overwhelms the very weak signal from dark
              | areas. At the high end, the sensor saturates: the
              | semiconductor bucket for the charge released by the
              | photons gets full.
             | 
              | And then the experience of the picture is of course
              | limited by the medium used to display it. Even the best
              | screens can't show more than a small fraction of the
              | contrast that the eye experiences outdoors on a sunny
              | day, let alone printed media.
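              | 
              | Back-of-the-envelope for the sensor limit (a sketch
              | with illustrative numbers, not any particular sensor):
              | 
              |   from math import log2
              | 
              |   def dynamic_range_stops(full_well_e, read_noise_e):
              |       # Ratio of the brightest recordable signal
              |       # (full-well capacity, in electrons) to the
              |       # noise floor, expressed in stops.
              |       return log2(full_well_e / read_noise_e)
              | 
              |   print(dynamic_range_stops(50_000, 3))  # ~14 stops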
        
       ___________________________________________________________________
       (page generated 2022-03-13 23:00 UTC)