What is HDR, anyway?
(lux.camera) | 806 points by _kush a year ago
I did my PhD in Atomic, Molecular, and Optical (AMO) physics, and despite "optical" being part of that I realized midway that I didn't know enough about how regular cameras worked!
It didn't take very long to learn, and it turned out to be extremely important in the work I did during the early days at Waymo and later at Motional.
I wanted to pass along this fun video from several years ago that discusses HDR: https://www.youtube.com/watch?v=bkQJdaGGVM8 . It's short and fun, I recommend it to all HN readers.
Separately, if you want a more serious introduction to digital photography, I recommend the lectures by Marc Levoy from his Stanford course: https://www.youtube.com/watch?v=y7HrM-fk_Rc&list=PL8ungNrvUY... . I believe he runs his own group at Adobe now after leading a successful effort at Google that made their Pixel cameras the best in the industry for a couple of years. (And then everyone more-or-less caught up, just like with most tech improvements in the history of smartphones.)
Try capturing fire with a non-Sony phone and a Sony phone. Samsung, at least, doesn't color-correct blackbodies correctly, and the flame looks nothing like reality.
Pixel camera hardware or software? Isn't there only one vendor for sensors - Sony?
Samsung also makes sensors for phones. IIRC some Pixels use their sensors.
I think Canon makes at least some of their own sensors. Nikon designs theirs and has them fabbed by a third party whose name I forget (not Sony or Samsung), though they still use Sony sensors in a lot of their cameras.
I don't know about Pentax, Panasonic or OMD (formerly Olympus)
I think folks here have some idea how expensive chip fabs are. That's why only Canon is able to make their own sensors.
Sony makes sensors for pretty much everyone else. But it's well known that other folks e.g. Nikon have been able to get better signal-to-noise with Sony-made sensors than Sony themselves. I think Panasonic used to make their own sensors but with some recent re-org, that got spun out.
It's been widely rumored that Leica uses Sony sensors, but this gets repeatedly denied by people claiming inside information. We know that Leica was getting 24MP CMOS sensors from CMOSIS in the 2012 timeframe, but CMOSIS has since been acquired by ams (now ams OSRAM), and there hasn't been any verifiable information since then, whether confirming or denying a continued business relationship.
He worked mostly on the software side, but of course had important input into what sensors and processors were chosen for the phones.
> Our eyes can see both just fine.
This gets to a gaming rant of mine: Our natural vision can handle these things because our eyes scan sections of the scene with constant adjustment (light-level, focus) while our brain is compositing it together into what feels like a single moment.
However, certain effects in games (e.g. "HDR" and depth of field) instead reduce the fidelity of the experience. These features limp along only while our gaze is aimed at the exact spot the software expects. If you glance anywhere else around the scene, you instead perceive an unrealistically wrong coloration or blur that frustratingly persists no matter how much you squint. These problems will remain until gaze-tracking support becomes standard.
So ultimately these features reduce the realism of the experience. They make it less like being there and more like you're watching a second-hand movie recorded on flawed video-cameras. This distinction is even clearer if you consider cases where "film grain" is added.
https://www.realtimerendering.com/blog/thought-for-the-day/
It's crazy that post is 15 years old. As the OP and this post get at, HDR isn't really a good description of what's happening. HDR often means one or more of at least 3 different things (capture, storage, and presentation). It's just a sticker slapped on in advertising.
Things like lens flares, motion blur, film grain, and shallow depth of field are mimicking cameras and not what being there is like--but from a narrative perspective we experience a lot of these things through TV and film. It's visual shorthand. Like Star Wars or Battlestar Galactica copying WWII dogfight footage even though it's less like what it would be like if you were there. High-FPS television can feel cheap while 24fps can feel premium and "filmic."
Often those limitations are in place so the experience is consistent for everyone. Games will have you set brightness and contrast--I had friends who would crank everything up to avoid jump scares and to clearly see objects intended to be hidden in shadows. Another reason for consistent presentation is to prevent unfair advantages in multiplayer.
> Things like lens flares, motion blur, film grain, and shallow depth of field are mimicking cameras and not what being there is like
Ignoring film grain, our vision has all these effects all the same.
Look in front of you and only a single plane will be in focus (and only your fovea produces any sort of legibility). Look towards a bright light and you might get flaring from just your eyes. Stare out the side of a car or train when driving at speed and you'll see motion blur, interrupted only by brief clarity if you intentionally try to follow the motion with your eyes.
Without depth of field simulation, the whole scene is just a flat plane with completely unrealistic clarity, and because it's comparatively small, too much of it is smack in the center of your fovea. The problem is that these are simulations that do not track your eyes, and they make the (mostly valid!) assumption that you're looking nearby or in front of whatever you're controlling.
Maybe motion blur becomes unnecessary given a high enough resolution and refresh rate, but depth of field either requires actual depth or foveal tracking (which only works for one person). Tasteful application of current techniques is probably better.
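For the curious, the blur these DoF simulations approximate falls straight out of the standard thin-lens model. A rough Python sketch; the 50mm f/1.8 numbers are just hypothetical examples:

    def circle_of_confusion_mm(f_mm, n, focus_mm, subject_mm):
        # Thin-lens blur-circle diameter on the sensor for an out-of-focus point.
        aperture_mm = f_mm / n  # aperture diameter from f-number
        return (aperture_mm * f_mm * abs(subject_mm - focus_mm)
                / (subject_mm * (focus_mm - f_mm)))

    # 50mm f/1.8 focused at 2m: how blurred is a point 5m away?
    print(circle_of_confusion_mm(50, 1.8, 2000, 5000))  # ~0.43mm on the sensor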
> High FPS television can feel cheap while 24fps can feel premium and "filmic."
Ugh. I will never understand the obsession with this effect. There is no such thing as a "soap opera effect" as people like to call it, only a slideshow effect.
The history behind this is purely a series of cost-cutting measures entirely unrelated to the user experience or artistic qualities. 24 fps came to be because audio was slapped onto the film, and it was the slowest speed at which the audio track was acceptably intelligible, saving costly film stock - the sole priority at the time. Before that, we used to record content at variable frame rates but play it back at 30-40 fps.
We're clinging to a cost-cutting measure that was a significant compromise dating from the era of hand-cranked film recording.
</fist-shaking rant>
> Look in front of you and only a single plane will be in focus (and only your fovea produces any sort of legibility). Look towards a bright light and you might get flaring from just your eyes. Stare out the side of a car or train when driving at speed and you'll see motion blur, interrupted only by brief clarity if you intentionally try to follow the motion with your eyes.
The problem is the mismatch between what you’re looking at on the screen and what the in-game camera is looking at. If these were synchronised perfectly it wouldn’t be a problem.
> Ugh. I will never understand the obsession with this effect.
All of these (lens flares, motion blur, film grain, DoF, tone mapping, exposure, and frame rate) are artistic choices constrained by the equipment we have to capture and present them. I think they'll always follow trends. In my entire career following film, photography, computer graphics, and game dev, the only time I've heard anyone talk about how we experience any of those things is when people say humans see roughly the equivalent of a 50mm lens (on 35mm film).
Just look at the trend of frame size. Film was roughly 4:3, television copied it. Film started matting/cropping the frame. It got crazy with super wide-screen to where some films used 3 projectors side-by-side and most settled on 16:9. Then television copied it. Widescreen is still seen as more "filmic." I remember being surprised working on a feature that switched to Cinemascope's aspect ratio and seeing that was only 850 pixels tall--a full frame would be about twice that.
To me, high frame rate was always just another style. My only beef was with motion-smoothing muddying up footage shot at different frame rates.
The problem is that it just doesn't work on modern, fast displays. Without motion smoothing on a fast and bright screen, 24fps/30fps goes from "choppy" to "seizure inducing and unwatchable". Older sets would just naturally smooth things out.
Even on my LCD TV, smooth motion like credits scrolling at certain speeds is extremely uncomfortable to look at at these frame rates.
I consider it borderline irresponsible to continue using these frame rates, forcing users into frame interpolation and horrible artifacts, a decision the manufacturer might even have made for them. Now that 120 Hz is finally becoming the norm for displays (with monitors going to 500+ Hz nowadays), we should at least be able to get to 60 fps as the lower bound for regular content delivery.
Going further down for artistic value, e.g. for stop motion or actual slide shows is less of a problem in my opinion. It is not as disturbing, and if regular content was appropriately paced there would be no need for interpolation to mess with it...
> Just look at the trend of frame size.
Frame size is different from the other parameters, as it is solely a physical practicality. Bigger is better in all directions, but a cinema screen needs to fit in the building - making a building much taller is less economical than making it wider, and making it whatever it isn't right now adds novelty.
The content needs to be made for the screen with the appropriate balance of periphery and subject to not be completely wrong, so screen technology and recording technology tends to align. Economy of scale causes standardization on lenses and image circles, and the choice of aspect ratio within that circle on the film, forming a feedback loop that enforces the parameters for almost all content.
If some technology somewhere else in the stack causes a change, some will follow for the novelty but others will simply follow the falling cost, and soon all content aligns on the format, and the majority of home TV sets will be shaped to fit the majority of content they can receive.
> The problem is that it just doesn't work on modern, fast displays.
I'm very confused by this. From what I've seen it's been getting a lot better (since transitioning from CRTs). At least for television, frame-rate matching is becoming more of a thing. Higher frame rates really help. Calling everything fps for simplicity: 120 divides evenly by 24, 30, and 60; lower values won't match and cause issues (see the sketch below).
Similarly (maybe back in the 90s?), projectors in theaters would show each frame twice (double-shuttering) to reduce the flicker between frames. With digital, they no longer have to advance the film between frames.
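To make the divisibility point concrete, here's a small Python sketch that just distributes content frames across display refreshes:

    from math import floor

    def refreshes_per_frame(content_fps, display_hz, frames=6):
        # How many display refreshes each content frame is held for.
        return [floor((i + 1) * display_hz / content_fps)
                - floor(i * display_hz / content_fps) for i in range(frames)]

    print(refreshes_per_frame(24, 60))   # [2, 3, 2, 3, 2, 3] -> 3:2 pulldown judder
    print(refreshes_per_frame(24, 120))  # [5, 5, 5, 5, 5, 5] -> even cadence
    print(refreshes_per_frame(30, 120))  # [4, 4, 4, 4, 4, 4] -> even cadence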
> smooth motion like credits at certain speeds are extremely uncomfortable to look at at these frame rates.
I think scrolling credits are the most difficult use case: white on black with hard text and no blur. DLP projectors (common 10+ years ago) drive me nuts displaying R G and B separately.
Outside of credits, cinematographers and other filmmakers do think about these things. I remember hearing a cinematographer talk about working on space documentaries in Imax. If you panned too quickly, the white spaceship over a black star field could jump multiple feet each frame. Sure films shot today are optimized for the theater, but the technology gap between theater and home is nowhere near as crazy as CRT vs acetate.
> Frame size is different from the other parameters, as it is solely a physical practicality.
I'm still struggling to see how it's that different. Widescreen meant a lower effective resolution (it didn't have to--it started with Cinerama and Cinemascope), but was adopted for cost and aesthetic reasons.
> If some technology somewhere else in the stack causes a change…and soon all content aligns on the format, and the majority of home TV sets will be shaped to fit the majority content it can receive.
And the industry and audiences are really attached to 24fps. Like you say, home televisions adopted film's aspect ratio and I've also seen them adopt much better support for 24fps.
As kind of an aside, I wonder if the motion blur is what people are attached to more than the actual frame rate. I assume you're talking about frame rates higher than 30? Sure, we have faster films and brighter lights, but exposure time is really short. I saw the Hobbit in theaters in both high frame rate and 24fps and the 24fps one looked weird to me, too--I meant to look it up, but I assume they just dropped frames making the blur odd.
> the poster found it via StumbleUpon.
Such a blast from the past, I used to spend so much time just clicking that button!
I'm with you on depth of field, but I don't understand why you think HDR reduces the fidelity of a game.
If you have a good display (e.g. an OLED) then the brights are brighter and simultaneously there is more detail in the blacks. Why do you think that is worse than SDR?
Check out this old post: https://www.realtimerendering.com/blog/thought-for-the-day/
HDR in games would frequently mean clipping highlights and adding bloom. Prior to "HDR", exposure looked rather flat.
That hasn't been what it means since 2016 or so, when consumer TVs got support for properly displaying brighter whites and colors.
It definitely adds detail now, and for the last 8-9 years.
Though consumer TVs obviously still fall short of being as bright at peak as the real world. (We'll probably never want our TV to burn out our vision like the sun, but hitting highs at least in the 1,000-2,000 nit range, vs. the 500-700 that a lot of sets peak at right now, would be nice for most uses.)
OK, so it doesn't mean real HDR but simulated HDR.
Maybe when proper HDR support becomes mainstream in 3D engines, that problem will go away.
Right. Just like the article, HDR is too vague to mean anything specific and a label that's slapped onto products. In gaming, it often meant they were finally simulating light and exposure separately--clipping highlights that would have previously been shown. In their opinion, reducing the fidelity. Same with depth of field blurring things that used to not have blur.
It's HDR at the world data level, but SDR at the rendering level. It's simulating the way film cannot handle real-life high dynamic range and clips it instead of compressing it like "HDR" in photography.
> Instead of compressing it like "HDR" in photography
That's not HDR either, that's tone mapping to SDR. The entire point of HDR is that you don't need to compress it because your display can actually make use of the extra bits of information. Most modern phones take true HDR pictures that look great on an HDR display.
That's revisionist nomenclature. HDR photography has meant tone mapping long before HDR displays existed.
The “HDR” here is in the sense of “tone mapping to SDR”. It should also be said that even “H”DR displays only have a stop or two more range, still much less than real-world high-contrast scenes.
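To illustrate what "tone mapping to SDR" means in practice, here's a minimal sketch using the classic Reinhard operator as a stand-in for whatever curve a real camera pipeline actually uses (luminance values are arbitrary linear units):

    import numpy as np

    def reinhard_tonemap(luminance):
        # Compress unbounded scene luminance into [0, 1) for an SDR display.
        return luminance / (1.0 + luminance)

    # A scene spanning ~17 stops, from deep shadow to a sunlit highlight:
    scene = np.array([0.001, 0.1, 1.0, 10.0, 100.0])
    print(reinhard_tonemap(scene))  # everything lands in [0, 1); nothing clips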
It's still better though.
HDR displays are >1000nits while SDR caps out at less than 500nits even on the best displays.
E.g. for the Samsung S90C, HDR is 1022 nits, SDR is 487 nits: https://www.rtings.com/tv/reviews/samsung/s90c-oled#test_608 https://www.rtings.com/tv/reviews/samsung/s90c-oled#test_4
Double the range is undeniably still better.
And also 10bit instead of 8bit, so less posterization as well.
Just because the implementations have been subpar until now doesn't mean it's worthless tech to pursue.
> HDR displays are >1000nits
Displays as low as 400nits have been marketed as "HDR".
But nits are only part of the story. What really matters in the end is the range between the darkest and brightest color the display can show under the lighting conditions you want to use it in. 400 nits in a darkened room where blacks are actually black can have much more actual range than 1000 nits with very bright "blacks" due to shitty display tech or excessive external illumination.
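A quick sketch of that point, with made-up but plausible black levels:

    from math import log2

    def usable_stops(peak_nits, black_nits):
        # Dynamic range in photographic stops (doublings of luminance).
        return log2(peak_nits / black_nits)

    # Hypothetical: a 400-nit OLED in a dark room vs. a 1000-nit LCD whose
    # blacks are lifted by backlight bleed and room-light reflections.
    print(usable_stops(400, 0.0005))  # ~19.6 stops
    print(usable_stops(1000, 0.5))    # ~11 stops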
The most egregious example is 3D. Only one thing is in focus, even though the scene is stereoscopic. It makes no sense visually.
Hell yeah, this is one of many issues I had with the first Avatar movie. The movie was so filled with cool things to look at, but none of it was in focus. Ten minutes in I'd had enough and was ready for a more traditional movie experience. Impressive, yes, for 10 minutes, then exhausting.
this thread is helping me understand why I always thought 3D movies looked _less_ 3D than 2D movies.
That and after seeing Avatar 1 in 3D, then seeing Avatar 2 in 3D over 10 years later and not really noticing any improvement in the 3D made me declare 3D movies officially dead (though I haven’t done side by side comparisons)
I had a similar complaint with the few 3D things I watched when that has been hyped in the past (e.g., when Avatar came out in cinemas, and when 3D home TVs seemed to briefly become a thing 15 years ago). It felt like Hollywood was giving me the freedom to immerse myself, but then simultaneously trying to constrain that freedom and force me to look at specific things in specific ways. I don't know what the specific solution is, but it struck me that we needed to be adopting lessons from live stage productions more than cinema if you really want people to think what they're seeing is real.
Stereo film has its own limitations. Sadly, shooting for stereo was expensive and often corners were cut just to get it to show up in a theater where they can charge a premium for a stereo screening. Home video was always a nightmare--nobody wants to wear glasses (glassesless stereo TVs had a very narrow viewing angle).
It may not be obvious, but film has a visual language. If you look at early film, it wasn't obvious that if you cut to something, the audience would understand what was going on. Panning from one object to another implies a connection. It's built on the visual language of still photography (things like the rule of thirds, using contrast or color to direct your eye, etc). All directing your eye.
Stereo film has its own limitations that were still being explored. In a regular film, you would do a rack focus to connect something in the foreground to the background. In stereo, when there's a rack focus people don't follow the camera the same way. In regular film, you could show someone's back in the foreground of a shot and cut them off at the waist. In stereo, that looks weird.
When you're presenting something you're always directing where someone is looking--whether its a play, movie, or stereo show. The tools are just adapted for the medium.
I do think it worked way better for movies like Avatar or How to Train Your Dragon and was less impressive for things like rom coms.
HDR, not "HDR", is the biggest leap in gaming visuals made in the last 10 years, I think.
Sure, you need a good HDR-capable display and a native HDR-game (or RTX HDR), but the results are pretty awesome.
These effects serve the artistic intent of the game. The same goes for movies, and it has nothing to do with "second-hand movies recorded on flawed cameras" or with "realism" in the sense of how we perceive the world.
This is why I always turn off these settings immediately when I turn on any video game for the first time. I could never put my finger on why I didn’t like it, but the camera analogy is perfect
It seems like a mistake to lump HDR capture, HDR formats and HDR display together, these are very different things. The claim that Ansel Adams used HDR is super likely to cause confusion, and isn’t particularly accurate.
We’ve had HDR formats and HDR capture and edit workflows since long before HDR displays. The big benefit of HDR capture & formats is that your “negative” doesn’t clip super bright colors and doesn’t lose color resolution in super dark color. As a photographer, with HDR you can re-expose the image when you display/print it, where previously that wasn’t possible. Previously when you took a photo, if you over-exposed it or under-exposed it, you were stuck with what you got. Capturing HDR gives the photographer one degree of extra freedom, allowing them to adjust exposure after the fact. Ansel Adams wasn’t using HDR in the same sense we’re talking about, he was just really good at capturing the right exposure for his medium without needing to adjust it later. There is a very valid argument to be made for doing the work up-front to capture what you’re after, but ignoring that for a moment, it is simply not possible to re-expose Adams’ negatives to reveal color detail he didn’t capture. That’s why he’s not using HDR, and why saying he is will only further muddy the water.
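Here's a toy sketch of that extra degree of freedom, assuming a linear floating-point scene-referred image (the numbers are illustrative):

    import numpy as np

    scene = np.array([0.02, 0.5, 4.0, 60.0])  # linear scene luminance, unbounded

    # An 8-bit SDR capture bakes the exposure in: everything above 1.0 is gone.
    sdr_negative = np.clip(scene, 0.0, 1.0)

    # A float HDR capture keeps it all, so re-exposing is a multiply at print time.
    hdr_negative = scene.astype(np.float32)
    for stops in (-2, 0, 2):
        print(stops, np.clip(hdr_negative * 2.0**stops, 0.0, 1.0))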
Arguably, even considering HDR a distinct thing is itself weird and inaccurate.
All mediums have a range, and they've never all matched. Sometimes we've tried to calibrate things to match, but anyone watching SDR content for the past many years probably didn't do so on a color-calibrated and brightness calibrated screen - that wouldn't allow you to have a brightness slider.
HDR on monitors is about communicating content brightness and monitor capabilities, but then you have the question of whether to clip the highlights or just map the range when the content is mastered for 4000 nits but your monitor manages 1000-1500 and only in a small window.
This! Yes I think you’re absolutely right. The term “HDR” is in part kind of an artifact of how digital image formats evolved, and it kind of only makes sense relative to a time when the most popular image formats and most common displays were not very sophisticated about colors.
That said, there is one important part that is often lost. One of the ideas behind HDR, sometimes, is to capture absolute values in physical units, rather than relative brightness. This is the distinguishing factor that film and paper and TVs don’t have. Some new displays are getting absolute brightness features, but historically most media display relative color values.
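For reference, this is what the PQ transfer function (SMPTE ST 2084) does: it encodes absolute luminance in nits, up to 10,000. A sketch using the constants from the published spec:

    # PQ inverse EOTF: absolute luminance (nits) -> 10-bit code value.
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

    def pq_encode(nits):
        y = min(nits / 10000.0, 1.0)  # PQ tops out at 10,000 nits absolute
        v = ((c1 + c2 * y**m1) / (1 + c3 * y**m1)) ** m2
        return round(v * 1023)

    for nits in (0.1, 100, 1000, 10000):
        print(nits, pq_encode(nits))  # a few code steps cover huge ranges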
Absolute is also a funny notion. From the perspective of human visual perception, an absolute brightness only matters if the entire viewing environment is also controlled to the same absolute values. Visual perception is highly contextual, and we are not only seeing the screen.
It's not fun being unable to watch dark scenes during the day or evening in a living room, nor is vaporizing your retinas because the ambient environment went dark in the meantime. People want a good viewing experience in the available environment, one that is logically similar to what the content intended, but that is not always the same as reproducing the exact same photons the director's mastering monitor sent toward their eyeballs at the time of production.
Indeed. For a movie scene depicting the sky including the Sun, you probably wouldn't want your TV to achieve the same brightness as the Sun. You might want your TV to become significantly brighter than the rest of the scenes, to achieve an effect something like the Sun catching your eye.
Of course, the same thing goes for audio in movies. You probably want a gunshot or explosion to sound loud and even be slightly shocking, but you probably don't want it to be as loud as a real gunshot or explosion would be from the depicted distance.
The difference is that for 3+ decades the dynamic range of ubiquitous audio formats (like 16 bit PCM in audio CDs and DVDs) has provided far more dynamic range than is comfortably usable in normal listening environments. So we're very familiar with audio being mastered with a much smaller dynamic range than the medium supports.
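The arithmetic behind that, as a quick sketch:

    from math import log10

    def dynamic_range_db(bits):
        # Theoretical dynamic range of n-bit linear PCM, ignoring dither.
        return 20 * log10(2**bits)

    print(dynamic_range_db(16))  # ~96 dB, far more than a living room can use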
Yep, absolutely! ;)
This brings up a bunch of good points, and it tracks with what I was trying to say about conflating HDR processing with HDR display. But do keep in mind that even when you have absolute value images, that doesn’t imply anything about how you display them. You can experience large benefits with an HDR workflow, even when your output or display is low dynamic range. Assume that there will be some tone mapping process happening and that the way you map tones depends on the display medium and its capabilities, and on the context and environment of the display. Using the term “HDR” shouldn’t imply any mismatch or disconnect in the viewing environment. It only did so in the article because it wasn’t very careful about its terms and definitions.
The term "HDR" arguably makes more sense for the effect achieved by tone mapping multiple exposures of the same subject onto a "normal" (e.g. SRGB) display. In this case, the "high" in "HDR" just means "from a source with higher dynamic range than the display."
Remember "wide gamut" screens ?
This is part of 'HDR' standards too...
And it's quite annoying that 'HDR' (and which specific one ?) is treated as just being 'on' or 'off' even for power users...
> but your monitor manages 1000-1500 and only in a small window.
Owning a display that can do 1300+ nits sustained across a 100% window has been the biggest display upgrade I think I have ever had. It's given me a tolerance for LCD, a technology I've hated since the death of CRTs, and it has turned me away from OLED.
There was a time I would have said i'd never own a non OLED display again. But a capable HDR display changed that logic in a big way.
Too bad the motion resolution on it, especially compared to OLED, is meh. Again, at one point motion was the most important aspect to me (it's why I still own CRTs), but this level of HDR is... transformative, for lack of a better word.
Motion resolution? Do you mean the pixel response time?
CRTs technically have quite a few artifacts in this area, but as content displayed on CRTs tends to be built for CRTs, this is less of an issue, and in many cases even required. The input is expecting specific distortions and effects from scanlines and phosphor, which a "perfect" display wouldn't exhibit...
The aggressive OLED ABL is simply a thermal issue. It can be mitigated with thermal design in smaller devices, and anything that increases efficiency (be it micro lens arrays, stacked "tandem" panels, quantum dots, alternative emitter technology) will lower the thermal load and increase the max full panel brightness.
(LCD with zone dimming would also be able to pull this trick to get even brighter zones, but because the base brightness is high enough it doesn't bother.)
> Motion resolution? Do you mean the pixel response time?
I indeed meant motion resolution, which pixel response time only partially affects. It's about how clearly a display shows motion, unlike static resolution, which realistically only reflects a still image. Even with fast pixels, sample-and-hold displays blur motion unless the framerate and refresh rate are high, or BFI/strobing is used. This blur immediately lowers perceived resolution the moment anything moves on screen (there's a quick sketch of the math further down).
> The input is expecting specific distortions and effects from scanlines and phosphor, which a "perfect" display wouldn't exhibit...
That's true for many CRT purists, but is not a huge deal for me personally. My focus is motion performance. If LCD/OLED matched CRT motion at the same refresh rate, I’d drop CRT in a heartbeat, slap on a CRT shader, and call it a day. Heresy to many CRT enthusiasts.
Ironically, this is an area in which I feel we are getting CLOSE enough with the new higher refresh OLEDs for non HDR retro content in combination with: https://blurbusters.com/crt-simulation-in-a-gpu-shader-looks... (which hopefully will continue to be improved.)
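The sample-and-hold blur mentioned above is easy to quantify if you assume the eye tracks the motion perfectly. A sketch:

    def sample_and_hold_blur_px(speed_px_per_s, refresh_hz):
        # Perceived smear width: the frame sits still for a full refresh
        # interval while the tracking eye keeps moving.
        return speed_px_per_s / refresh_hz

    # An object crossing a 3840px-wide screen in 2 seconds (1920 px/s):
    for hz in (60, 120, 480):
        print(hz, sample_and_hold_blur_px(1920, hz))  # 32, 16, 4 px of smear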
> The aggressive OLED ABL is simply a thermal issue.
Theoretically, yes and there’s been progress, but it’s still unsolved in practice. If someone shipped an OLED twice as thick and full of fans and heatsinks, I’d buy it tomorrow. But that’s not what the market wants, so obviously it's not what they make.
> It can be mitigated with thermal design in smaller devices, and anything that increases efficiency (be it micro lens arrays, stacked "tandem" panels, quantum dots, alternative emitter technology) will lower the thermal load and increase the max full panel brightness.
Sure, in theory. But so far the improvements (like QD-OLED or MLA) haven’t gone far enough. I already own panels using these. Beyond that, much of the tech isn’t in the display types I care about, or isn’t ready yet. Which is a pity, because the tandem based displays I have seen in usage are really decent.
That said, the latest G5 WOLEDs are the first I'd call acceptable for HDR at high APL, for my preferences, with very decent real-scene brightness, at least in film. Sadly, I doubt we'll see comparable performance in PC monitors until many years down the track, and monitors are my preference.
Hello fellow CRT owner. What is your use case? Retro video games? PC games? Movies?
Hello indeed!
> What is your use case? Retro video games? PC games? Movies?
All of the above! The majority of my interest largely stems from the fact that, for whatever reason, I am INCREDIBLY sensitive to sample-and-hold motion blur. Whilst I tolerate it for modern gaming because I largely have no choice, CRTs mean I do not for my retro gaming, which I very much enjoy. (I was very poor growing up, so most of it for me is not even nostalgia; most of these games are new to me.)
Outside of that, we have a "retro" corner in our home with a 32" trinitron. I collect laserdisc/VHS and we have "retro video" nights where for whatever reason, we watch the worst possible quality copies of movies we could get in significantly higher definition. Much the same as videogames, I was not exposed to a lot of media growing up, my wife has also not seen many things because she was in Russia back then, so there is a ton for us to catch up on very slowly and it just makes for a fun little date night every now and again.
Sadly though, as I get ready to take on a mortgage, it's likely most of my CRTs will be sold, or at least the broadcast monitors. I do not look forward to it haha.
> Outside of that, we have a "retro" corner in our home with a 32" trinitron.
A 32” Trinny. Nice. I have the 32” JVC D-series which I consider my crown jewel. It’s for retro gaming and I have a laserdisc player but a very limited selection of movies. Analog baby.
> Sadly though, as I get ready to take on a mortgage, it's likely most of my CRT's will be sold
Mortgage = space. You won't believe the nooks and crannies you can fit CRTs into. Attic. Shed. Crawl space. Space under the basement stairs. Heck, even the neighbor's house. I have no less than 14 CRTs ferreted away in the house. Wife thinks I have only 5. Get creative. Don't worry about the elements, these puppies were built to survive nuclear blasts. Do I have a sickness? Probably. But analog!!!
Speaking of LaserDisc, it's wild how vivid colors are on that platform. My main example movie is Star Trek: First Contact, and everything is very colorful. DVD is muddy. Even a Blu-ray copy kinda looks like crap. A total side note: the surround sound for that movie is absolutely awesome, especially the cube battle scene.
> I have the 32” JVC D-series which
I would love one of these, however I have never seen one in my country. Super jealous haha! The tubes they use were apparently American-made, with most of the JVCs released in my country using different tubes than those in the US market.
That being said, I do own two JVC "broadcast" monitors that I love, a 17" and a 19". They're no D-series real "TV", but still.
Adams adjusted heavily with dodging and burning, even working to invent a new chemical process to provide more control when developing. He was great at determining exposure for his process as well. A key skill was having a vision for what the image would be after adjusting. Adams talked a lot about this as a top priority of his process.
> It's even more incredible that this was done on paper, which has even less dynamic range than computer screens!
I came here to point this out. You have a pretty high dynamic range in the captured medium, and then you can use the tools you have to darken or lighten portions of the photograph when transferring it to paper.
Indeed so. Printing on paper and other substrates is inherently subtractive in nature which limits the gamut of colors and values that can be reproduced. Digital methods make the job of translating additive to subtractive media easier vs. the analog techniques available to film photographers. In any case, the image quality classic photography was able to achieve is truly remarkable.
Notably, the dodging and burning used by photographers aren't obsolete. There's a reason these tools are included in virtually every image-editing program out there. Manipulating dynamic range, particularly in printed images, remains part of the craft of image-making.
> The claim that Ansel Adams used HDR is super likely to cause confusion
That isn't what the article claims. It says:
"Ansel Adams, one of the most revered photographers of the 20th century, was a master at capturing dramatic, high dynamic range scenes."
"Use HDR" (your term) is vague to the point of not meaning much of anything, but the article is clear that Adams was capturing scenes that had a high dynamic range, which is objectively true.
I think about the Ansel Adams zone system
https://www.kimhildebrand.com/how-to-use-the-zone-system/
where my interpretation is colored by the experience of making high quality prints and viewing them under different conditions, particularly poor illumination quality but you could also count "small handheld game console", "halftone screened and printed on newsprint" as other degraded conditions. In those cases you might imagine that the eye can only differentiate between 11 tones so even if an image has finer detail it ought to connect well with people if colors were quantized. (I think about concept art from Pokémon Sun and Moon which looked great printed with a thermal printer because it was designed to look great on a cheap screen.)
In my mind, the ideal image would look good quantized to 11 zones but also has interesting detail in texture in 9 of the zones (extreme white and black don't show texture). That's a bit of an oversimplification (maybe a shot outdoors in the snow is going to trend really bright, maybe for artistic reasons you want things to be really dark, ...) but Ansel Adams manually "tone mapped" his images using dodging, burning and similar techniques to make it so.
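A sketch of that 11-zone quantization, assuming a grayscale image normalized to [0, 1] (zones 0 through X in Adams's notation):

    import numpy as np

    def quantize_to_zones(gray, zones=11):
        # Snap each pixel to the nearest of `zones` evenly spaced tones.
        return np.round(gray * (zones - 1)) / (zones - 1)

    # Does the image still read when reduced to 11 tones?
    gray = np.linspace(0.0, 1.0, 8)
    print(quantize_to_zones(gray))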
Literally the sentence preceding the one you quoted is “What if I told you that analog photographers captured HDR as far back as 1857?”.
And that quote specifically does not "lump HDR capture, HDR formats and HDR display together".
It is directly addressing capture.
Correct. I didn’t say that sentence was the source of the conflation, I said it was the source of the Ansel Adams problem. There are other parts that mix together capture, formats, and display.
Edit: and btw I am objecting to calling film capture “HDR”, I don’t think that helps define HDR nor reflects accurately on the history of the term.
That’s a strange claim because the first digital HDR capture devices were film scanners (for example the Cineon equipment used by the motion picture industry in the 1990s).
Film provided a higher dynamic range than digital sensors, and professionals wanted to capture that for image editing.
Sure, it wasn’t terribly deep HDR by today’s standards. Cineon used 10 bits per channel with the white point at coding value 685 (and a log color space). That’s still a lot more range and superwhite latitude than you got with standard 8-bpc YUV video.
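For reference, the standard Cineon-log-to-linear conversion looks roughly like this (the 0.002 density step and 0.6 gamma come from the published format; code 95 as black is the usual convention):

    def cineon_to_linear(code, white=685, black=95):
        # 10-bit Cineon log code value -> linear light, white point at 1.0.
        gain = 10 ** ((code - white) * 0.002 / 0.6)
        offset = 10 ** ((black - white) * 0.002 / 0.6)
        return (gain - offset) / (1 - offset)

    for cv in (95, 470, 685, 1023):
        print(cv, round(cineon_to_linear(cv), 3))
    # Codes above 685 go past 1.0 -- the superwhite latitude mentioned above.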
They didn’t call that “HDR” at the time, and it wasn’t based on the idea of recording radiance or other absolute physical units.
I’m certain physicists had high range digital cameras before Cineon, and they were working in absolute physical metrics. That would be a stronger example.
You bring up an important point that is completely lost in the HDR discussion: this is about color resolution at least as much as it's about range, if not more so. I can use 10 bits for a [0..1] range just as easily as I can use 4 bits to represent quantized values from 0 to 10^9. Talking about the range of a scene captured is leaving out most of the story, and all of the important parts. We've had outdoor photography, high-quality films, and the ability to control exposure for a long time, and that doesn't explain what "HDR" is.
It was called "extended dynamic range" by ILM when they published the OpenEXR spec (2003):
> OpenEXR (www.openexr.net), its previously proprietary extended dynamic range image file format, to the open source community
https://web.archive.org/web/20170721234341/http://www.openex...
And "larger dynamic range" by Rea & Jeffrey (1990):
> With γ = 1 there is equal brightness resolution over the entire unsaturated image at the expense of a larger dynamic range within a given image. Finally, the automatic gain control, AGC, was disabled so that the input/output relation would be constant over the full range of scene luminances.
https://doi.org/10.1080/00994480.1990.10747942
I'm not sure when everyone settled on "high" rather than "large" or "extended", but certainly 'adjective dynamic range' is near-universal.
As I remember it, Paul Debevec borrowed Greg Ward's RGBE file format at some point in the late 90s and rebranded it ".hdr" for his image viewer tool (hdrView) and code to convert a stack of LDR exposures into HDR. I can see presentations online from Greg Ward in 2001 that have slides with "HDR" and "HDRI" all over the place. So yeah, the term definitely must have started in the late 90s if not earlier. I'm not sure it was there in the early 90s though.