
photography and color

Is there a clear way to determine if you have a 10-bit path? Yes, fortunately, and it's pretty easy. Choose a test image that is just a luminosity (B&W, no color) gradation from black to white. This is easy to create in Photoshop: just drag out a gradient from the left edge to the right edge. You can also find these easily on many of the sites you mentioned, above. Make it a large area, be sure that it runs horizontally (not top to bottom), then look at it occupying most of your screen area. If you see vertical bands of tone, like steps, then you have an 8-bit system (although you cannot tell which component is causing it to be 8-bit from this test).
Or worse, you have a 6-bit monitor. Apple uses IPS LCD displays with at least 24 bits per pixel (8 bits x 3): 16.7 million possible colors. Cheaper TN displays (are they still a thing?) would often use 6 bits for each of the three primary colors, which is only 262,144 possible colors per pixel. Some 6-bit monitors would use dithering techniques and then claim to support "millions of colors."
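To see why the banding appears, here's a quick sketch (my own, not from any vendor tool) of how many distinct tones an 8-bit versus a 6-bit panel can actually show across a screen-wide gradient:

```python
import numpy as np

# Simulate a horizontal gray ramp across a 1920-pixel-wide screen,
# quantized at different per-channel bit depths.
width = 1920
ramp = np.linspace(0.0, 1.0, width)          # ideal continuous gradient

def quantize(signal, bits):
    """Round a 0..1 signal to 2**bits levels, as an n-bit panel would."""
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

# Distinct tones actually displayed across the ramp:
tones_8bit = len(np.unique(quantize(ramp, 8)))   # 256 levels -> fine steps
tones_6bit = len(np.unique(quantize(ramp, 6)))   # 64 levels -> visible bands

print(tones_8bit, tones_6bit)                    # 256 64
print(2 ** (3 * 8), 2 ** (3 * 6))                # 16777216 262144 colors/pixel
```

At 6 bits the 1920-pixel ramp collapses into just 64 vertical stripes, each 30 pixels wide, which is exactly the step pattern the test reveals.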
 


An interesting innovation in printing has appeared in the past few years: rather than printing on paper, photographers are printing on aluminum. The result is a much more lively print (due, I speculate, to the reflected light from the aluminum). Thoughts?
Well, my thoughts are really an opinion.

In my work, I choose the paper and ink based on the subject matter and my artistic intent. For example, a bright, shiny day with splashing water and lots of highlights probably isn't suited to matte paper. I'd likely choose resin-coated (RC); your metallic "paper" might go well there, too. We're looking for "zing!" (There is printing on metal, printing on plexiglass, printing on canvas - a whole range of options.)

A subdued, dignified CEO portrait would likely go on matte paper, on the other hand. An action portrait of a soccer star might be better on RC.

I'm not saying these are "rules", since it's your art: you choose. I'm only saying that the effect is different, and there is no "one size fits all."

(Personally, I usually find metallic paper "over the top" and have only found one of my own images that was improved by using it.)
 


The vast majority of the printed word isn't great art, but that doesn't prevent great art. Most music isn't destined to live for decades, but that doesn't stop great music from being written.
This reminds me of a few things my grandfather used to say. He would say that "90% of everything is garbage, including 90% of the remaining 10%". (He didn't actually use the word "garbage", but this is a family-friendly web site :-)

He also would explain why (in his opinion) classical music is so great: Back when it was composed, a few hundred years ago, there was just as much garbage music as people are composing today. But with the passage of time, most of the garbage has been lost or forgotten, with only the best material surviving to the present day.

I think we can extrapolate that theory to everything else. We see a lot of trash and it's easy to blame it on the technology, but it's not really true. The trash would be created on any technology. And some is good, and some small amount is great. Over the next 50-150 years most of the trash will fall by the wayside and only the great material will be remembered, giving future generations the (false) impression that everything created today was great.
 



Ric Ford

MacInTouch
Here's a collection of ICC (ICM) profiles for experimenting with color spaces/profiles:
Chromasoft said:
ICM Profiles
On this page there is a set of ICC profiles, also known as ICM profiles. These have been created from the data on Bruce Lindbloom's site, as well as information from Adobe, using the Little CMS toolkit.

Profiles tell your system how to display colors - they contain three key pieces of information:
  1. An exact definition of the gamut of the color space - in simple terms, exactly what shade of red the R component is, what shade the G component is, etc.
  2. A white point - often specified as a "D number", one of the CIE standard illuminants, e.g., D65 (6500K, overcast daylight) or D55 (5500K, warm daylight)
  3. A gamma curve - the way that we see light is non-linear, and many color systems mimic this
You can use these profiles in a number of ways:
  • If you have a raw developer program, such as Capture One, that directly supports ICC profiles, you can load and use these directly. So, for example, if under Capture One you wanted the screen readouts to be in WideGamut, you would just load WideGamut.icc as the output profile.
  • You can also convert color on the Mac by using the ColorSync utility's calculator; just select the appropriate profile in the calculator screen.
The profiles are in a single ZIP file, ICCProfiles.zip.

The root of the Zip file has the following profiles:

AppleRGB.icc
CIERGB.icc
MelissaRGB.icc
ProPhoto.icc
WideGamut.icc
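A quick illustration of the gamma-curve idea from item 3 above - this is the standard sRGB transfer function (my own sketch; it's not read out of these particular profiles):

```python
def srgb_encode(linear):
    """Standard sRGB transfer function: linear light -> encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# A linear mid-gray of 18% encodes to roughly 46% -- gamma encoding
# devotes more code values to shadows, mimicking the eye's
# non-linear response to light.
print(round(srgb_encode(0.18), 3))   # 0.461
```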
And a "Delta E Reference Image" is helpful for demonstrating differences between color spaces/profiles:
Bruce Lindbloom said:
RGB Reference Images
In the interest of digital imaging research, I am providing a set of four images that represent "perfect" images, that is, they represent a natural scene (as opposed to say, a test pattern or a gradient) which is completely void of any noise, aliasing or other image artifacts. They were taken with a virtual, six mega-pixel camera using a ray tracing program I wrote myself. ... The scene is titled Delta E and represents an imaginary view of my imaginary desktop (I am a color scientist).
 


... And a "Delta E Reference Image" is helpful for demonstrating differences between color spaces/profiles:
Re: the Bruce Lindbloom page (click the "info" button), the most interesting bit to me was at the very bottom of the page. I have never seen such a clear, and correct, example of metamerism. Most digital printers think of it as "a defect" that becomes visible when you look at a digital print from an acute angle (and which is generally fixed with modern inks and papers) - but in fact, metamerism is what allows us to see a full range of colors at all. Wikipedia has good explanations, but be ready for a deep dive.

With a basic knowledge of the principles, however, you can understand what calibration is; what a profile is; what the PCS (Profile Connection Space) is; why you need two profiles to print a digital image from a computer; and why some people create their own printer profiles.* It actually all does make sense - quite impressively, considering the difficulty of the situation 25 years ago, when Apple's ColorSync was the beginning of the revolution that led to the quality of printing we all enjoy today.

(*You do not need to know any of this to just print a snapshot. Hit the "print" button, and it will "just work." The above is how it works. The whole process, from human physiology through Bayer patterns and on to digital printing, just interests me. As usual, your mileage may vary.)
 


Ric Ford

MacInTouch
Information about color and Capture One:
Phase One said:
Colors in Capture One

Discover how Capture One deals with image color, how to set a permanent color space, and calibrate an Eizo ColorEdge CG monitor.

Essential information regarding colors in Capture One:
  • Capture One deals with colors in two ways: internally and for output.
  • Capture One works in a very large color space, similar to that captured by camera sensors. A large color space ensures that little clipping of the color data can occur. Clipping is the loss of image information in a region of an image. Clipping appears when one or more color values are larger than the histogram (color space of the output file).
  • At the end of the workflow, the RAW data has to be processed to pixel based image files, in defined color spaces. These spaces are smaller than the internal color space used by Capture One. When processing, some color data will be discarded. This is why it is paramount to perform color corrections and optimizations to images before processing to a smaller color space.
  • Capture One provides accurate color by reading the camera-generated RAW information, file header and settings file.
  • A RAW file is assigned a color profile once Capture One has established which camera model has been used. The RAW data is then translated to the internal working color space of Capture One and it is here that edits can be applied.
  • Image data is converted, by means of ICC profiles, to industry standard spaces such as Adobe RGB or sRGB during the processing stage.

Color Output Settings
Capture One Express for Sony can output to any RGB color space while Capture One Pro can also output CMYK. (It is necessary that the ICC profile is available on the local machine).

For Web
Images that are intended to be published on web sites should always be processed into the sRGB color space as few web-browsers are capable of color management and the subtleties of images will not only be lost but can also be incorrectly displayed. Images processed in larger color spaces like AdobeRGB will be displayed with less color (especially green), and are often slightly too dark when shown in browsers that only support sRGB.

For Print
Images for print should be output to suit the requirements of the client or lab. Adobe RGB is a large color space that is capable of expressing a wider gamut of colors than sRGB. Adobe RGB is, therefore, the preferred choice for images that are likely to receive extensive processing or retouching.

Camera Profiling
Embedding the ICC color profile into the processed file (ICC Profile > Embed Camera profile) ensures that no color changes are made to the image data, which is particularly important for creating camera profiles.

Retouching/Manipulation
Image files that are intended to receive intensive retouching and manipulation can benefit by being processed and output in 16-bit to ProPhoto RGB, which is an even larger color space than Adobe RGB.

CMYK Color Spaces
Capture One Pro provides a selection of the most common CMYK color spaces. The photographer can convert to CMYK during processing to ensure image quality, instead of applying this color space conversion in postproduction. CMYK can be selected from the Output Tool Tab.
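A minimal sketch of the clipping idea described above - when image data moves to a smaller output space, out-of-range components are simply lost. (Illustrative values only; a real conversion goes through ICC profiles and rendering intents, not a bare clamp.)

```python
import numpy as np

# A saturated green as it might exist in a wide internal working space,
# expressed in the smaller output space's coordinates (made-up values):
wide_gamut_pixel = np.array([-0.2, 1.15, 0.1])  # outside the 0..1 range

# Processing into the smaller space clamps out-of-range components --
# this is the clipping (lost color information) described above:
clipped = np.clip(wide_gamut_pixel, 0.0, 1.0)
print(clipped)        # [0.  1.  0.1]
```

This is also why the quote recommends doing color corrections before processing to a smaller space: once clamped, the original values cannot be recovered.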
 


Ric Ford

MacInTouch
Here's some advice about soft-proofing and Capture One (which apparently lacks the gamut warning feature of Photoshop) and a nice, visual demonstration (in the video) of color spaces and gamut:
Martin Bailey said:
Soft Proofing for Print in Capture One Pro and Photoshop (Podcast 575)
This week I’ve created a video to explain soft-proofing for print in Capture One Pro and Photoshop... I start the video by showing you the difference between a number of key color spaces too, to hopefully make it obvious that we really don’t want to cram our beautiful images, with many more colors, into these smaller color spaces.
...
When soft-proofing in Capture One Pro, here are the things you need to check. I don’t go through all of these in the video, so I thought I’d list them here for your reference....
 


Ric Ford

MacInTouch
I just realized something that may (or may not) be helpful (vs. obvious) to other folks who started in traditional silver/"wet" process photography:

A camera raw file is like a film negative (or slide) with "gamut" (dynamic range) and gamma (contrast) [that must then be] defined by camera firmware and raw file software (including OS, profiles, app, drivers) instead of development time, temperature and chemistry.

Viewing a raw file on a display is like projecting a negative or slide onto something. Software (OS, app, profiles, driver) and the display (hardware and internal firmware) will affect how the raw file looks, just like a projector/enlarger and its bulb and any filters, plus what you're projecting on, affects how a slide/negative looks.

Printing a raw file is like printing a negative, where software (OS, app, driver, profiles) and printer hardware (and firmware), instead of development time, temperature and chemistry, determine "gamut" (dynamic range) and gamma (contrast).

And, here's the kicker: "wet" chemical photographic processes produce prints that have a smaller dynamic range (i.e. "gamut") than negatives/slides - just as we see with digital prints vs. digital raw files!
 


I just realized something that may (or may not) be helpful (vs. obvious) to other folks who started in traditional silver/"wet" process photography:
A camera raw file is like a film negative (or slide) with "gamut" (dynamic range) and gamma (contrast) defined by camera firmware and raw file software (including OS, profiles, app, drivers) instead of development time, temperature and chemistry....
A raw file is not an image, and strictly speaking, you cannot view a raw file at all. A raw file is just a bunch of numbers representing the amount of light striking a photosensor. What turns all that raw data (hence the name) into an image is software that gives it an RGB value. (It's called "demosaicing.") There is no color information in the raw data - only a recording of luminosity. Inside your camera, software converts that to an image on your camera's monitor, or for you to download. Each manufacturer's format for that raw data is different, as is each manufacturer's demosaicing software.
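A toy sketch of the demosaicing idea (nearest-neighbor on an RGGB Bayer block; real converters use far more sophisticated interpolation, and this is not any manufacturer's algorithm):

```python
import numpy as np

def demosaic_nearest(mosaic):
    """Toy demosaic of an RGGB Bayer mosaic: each 2x2 cell's four
    luminosity readings become one RGB pixel (the two greens averaged).
    Real converters interpolate a full RGB value per photosite."""
    h, w = mosaic.shape
    r  = mosaic[0:h:2, 0:w:2]        # red-filtered photosites
    g1 = mosaic[0:h:2, 1:w:2]        # green photosites (odd rows)
    g2 = mosaic[1:h:2, 0:w:2]        # green photosites (even rows)
    b  = mosaic[1:h:2, 1:w:2]        # blue-filtered photosites
    return np.dstack([r, (g1 + g2) / 2.0, b])

# Four luminosity readings -> one RGB pixel (R=100, G=190, B=50):
raw = np.array([[100.0, 200.0],
                [180.0,  50.0]])
print(demosaic_nearest(raw))
```

Note that the input is nothing but brightness numbers; the "color" appears only because we know which filter sat over each photosite.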

Remember the old "Weston meter", which had a "bee-hive" glass section on top, and when you pressed the button on the side, the needle on the front would swing, powered only by the light coming in thru the beehive? That's how your camera sensor works. Instead of pushing a delicate needle, the amount of light is measured and recorded as a number.

And that is obviously not an image!

When you download the raw data, you are substituting your own software for the camera's software. The advantage is that you have lots of different demosaicing packages to choose from, and further, you can control how it works (unlike the camera, where it's basically fixed, and WYSIWYG.)

The disadvantage is that you must do the work yourself.

Fun fact: most raw files include a section that actually is a small JPEG. That's because you simply cannot see a raw file directly (unless you like looking at lists of numbers), and the camera has to make a quick and dirty jpg just to show you your shot on its built-in monitor.

So your analogy is basically correct, but technically wrong. There is no contrast nor gamut to a raw file itself. (There is a limitation on the range of the numbers - representing only luminosity, a.k.a. brightness - which varies between camera sensors. Cheap cameras will use 8-bit; better ones will use 12-bit, the most common, I believe. My Nikon D800 uses 14-bit but can be set to 12-bit.)

If you are processing raw files yourself, you want the largest range of raw numbers you can get.

There is a lot more to it than this, and it gets into the weeds quickly, such as how that range is apportioned across the range of light. (Raw bit depth is about dynamic range, not the number of colors you get to capture.)
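For reference, the arithmetic behind those bit depths - each extra bit doubles the number of distinct luminosity levels a photosite reading can record:

```python
# Distinct luminosity levels per photosite at common raw bit depths:
for bits in (8, 12, 14):
    print(bits, "bits:", 2 ** bits, "levels")
# 8 bits: 256 levels
# 12 bits: 4096 levels
# 14 bits: 16384 levels
```

So a 14-bit file has 64 times as many levels as an 8-bit one to apportion across the same range of light, which is where the extra latitude in raw processing comes from.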

So, yes: in a sense the raw data is like a negative, but more realistically it's like exposed, but undeveloped, film. With your raw-processing software, you develop the image.

Better to get these tidbits clear, lest an oversimplification lead to later confusion, in my opinion.
 


Ric Ford

MacInTouch
... So, yes: in a sense the raw data is like a negative, but more realistically it's like exposed, but undeveloped, film. With your raw-processing software, you develop the image....
Thanks for making that clearer, Tracy. I actually understood that the raw file is equivalent to what we knew as the "latent image" in the old days, and that it has to be "developed" to manifest a visible image, but I didn't express it well. (And I appreciate your covering more details, and... I do remember the essential Weston light meter - great analogy! :-)

Meanwhile, as I understand it, the setting on a camera for "Adobe RGB" vs. "sRGB" applies only to JPEGs produced in-camera and has no effect on raw files (which, as the video noted above explains, have their own, larger color spaces).
 


As I understand it, the setting on a camera for "Adobe RGB" vs. "sRGB" applies only to JPEGs produced in-camera and has no effect on raw files (which, as the video noted above explains, have their own, larger color spaces).
Each camera sensor (and I do mean "each") has its own "color space", using the term generally. Each photosite is covered by either a red, green, or blue filter (so the color you see comes from measuring the luminosity of that single filtered light). The question is "What red?" "How green?" and "Which blue?" Together, these determine the "color space" of the sensor, which is to say exactly what numbers are pulled off with the A/D converter.

The traditional "color space" (sRGB; Adobe RGB etc) is determined by you, using the software used to convert the raw data into an image.

So, you are correct - raw data doesn't have a color space in the sense of sRGB et al. And the numbers you get from a raw data file are still unmassaged, a.k.a. "raw."

When you pull the raw data in from the camera card, it usually (this varies by manufacturer) contains miscellaneous meta-data (read: data about the rest of the data), such as the camera settings; the color space selected by the user; perhaps the small image used on your camera's monitor; and so on. But, after that meta-data, the actual sensor data is completely unmodified.

As mentioned, that meta-data is often used as a default setting for a preliminary display in your processing software, such as Lightroom, DxO, et al. A raw image without a bump for the human eye is an extremely flat, dim thing (linear gamma) and, as such, would make finding a decent starting point for your own adjustments that much more difficult.

You can (again, sometimes...) see this, if you photograph the same image twice: once "as usual" and again using a different white-balance or one of the camera's built in "gimmick" settings. Download both those raw files, and you will likely see them in Lightroom (etc.) exhibiting those changes by default.

This confuses people, because they assume the data has changed. It has not! The software is simply applying the meta-data conditions to the [same raw] data.

By George! I think you've got it! :-)
 



One point about raw image files - it is a common misconception that these files literally contain the raw bits coming off of a sensor. While I suppose there may be some cameras that do this, there is usually some (minimal) amount of processing taking place.

For example, a light sensor chip typically returns arbitrary 8- or 16-bit numbers corresponding to the amount of light hitting the element. This data usually needs to be calibrated (often using a calibration table provided by the chip itself) to convert the number into some real-world unit (e.g. lux). Since the calibration data can vary from chip to chip, it is likely that a camera will perform that conversion and store the real-world-unit-data in the raw file.

Additionally, some sensor systems involve taking many readings of each element over time. So each sensor element doesn't have one value but has a stream of values (which may be very large, depending on your exposure time). Although there may be some cameras that store all of these per-element streams in the raw file, I would expect most to perform some operation on the stream to produce a fixed-size amount of per-element data (maybe a structure of count, sum, average, standard deviation, etc.) in order to keep long exposures from overflowing the camera's memory (and from filling the storage device).
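One way to keep such a fixed-size per-element summary - purely my sketch, not any camera's actual firmware - is a running-statistics method such as Welford's algorithm, which updates count, mean and variance one reading at a time:

```python
class RunningStats:
    """Fixed-size summary of a stream of readings (Welford's algorithm):
    memory stays constant no matter how long the exposure runs."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self._m2 = 0.0            # running sum of squared deviations

    def add(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self._m2 / self.count if self.count else 0.0

# One sensor element read repeatedly during a long exposure:
stats = RunningStats()
for reading in [10.0, 12.0, 11.0, 13.0]:
    stats.add(reading)
print(stats.count, stats.mean, stats.variance)   # 4 11.5 1.25
```

Whether the stream holds four readings or four million, the stored state is just three numbers per element.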

The camera may also run some noise-filtering algorithms on the sensor data to compensate for the fact that all sensors have a certain amount of noise (that can vary from picture to picture depending on a wide variety of factors) - returning the filtered data. And some (I suspect) will not return this, but will include the unfiltered data along with a noise profile (which your computer will have to apply during processing).

I suppose the big takeaway is that the content of raw files can vary tremendously from camera to camera, and it is very rarely (if ever) going to be a completely unprocessed stream of sensor data. The amount of processing is minimal (especially compared to an image format like JPEG), but it is not going to be zero either.
 


... I suppose the big takeaway is that the content of raw files can vary tremendously from camera to camera, and it is very rarely (if ever) going to be a completely unprocessed stream of sensor data. The amount of processing is minimal (especially compared to an image format like JPEG), but it is not going to be zero either.
Thanks to David C for that. I didn't mention the on-chip processing, because it's still the data coming off the chip, per se, and I didn't want to get down in the weeds too much. The A/D process may vary between CCDs, as well.

My point was that a raw file is the data that is given to a post-processing engine, either in the camera, or on one's computer. I admit, as David pointed out, that I sacrificed some modest technical accuracy in order to present a less-confusing overview.

I was, however, completely unaware that any camera used a "streaming" process, so I learned something! (Wouldn't that cause the sensor to heat up, and introduce noise as well?) Which cameras (or sensors) use this technique?
 


I need to scan (with a MacBook Air 2011 running High Sierra) several hundred photos and slides this fall. They are old family photos, and the quality isn’t superb to begin with. Slides are Kodachrome. I would like advice about choosing a scanner. The results may never make it off a computer monitor onto paper. I am sure that a scanner costing under $200 is more than adequate. I tend toward the Epson V600.

Is a dedicated film scanner for the slides enough better than a flatbed scanner to be worth the extra cost of buying a second scanner? If the flatbed doesn’t need to do slides, I could spend less on it.
 


I was, however, completely unaware that any camera used a "streaming" process, so I learned something! (Wouldn't that cause the sensor to heat up, and introduce noise as well?) Which cameras (or sensors) use this technique?
I don't know how popular it is, but it's my understanding that this is how Apple implements HDR on an iPhone. They reset the sensor and start reading values, which grow over time based on the intensity of the light. They "snap" the values at three different times without resetting the sensors in between, producing three different exposures, which they then combine with software.

This is compared with the more "traditional" method, where you snap three (or more) independent pictures. The advantage to the traditional method is that each picture is snapped the same as a standalone picture and probably doesn't require any changes to how you read the sensor. The disadvantage is that your subject needs to remain still for a longer period of time. (e.g. if your three exposures are for 1, 2 and 4s, the Apple technique would capture all three images in 4 seconds, whereas a separate-picture technique would require 7 seconds).
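The timing arithmetic can be sketched with a trivial simulation (a constant accumulation rate is assumed here; real photosites are noisier and non-ideal):

```python
# Simulate a photosite accumulating charge at a constant rate, read
# non-destructively ("snapped") at three times without a reset between:
rate = 100.0                                 # arbitrary charge units/second
snap_times = [1.0, 2.0, 4.0]                 # the 1, 2 and 4 s exposures

exposures = [rate * t for t in snap_times]   # the three bracketed frames
overlapped_total = max(snap_times)           # reads overlap: 4 s total
sequential_total = sum(snap_times)           # separate shots: 7 s total

print(exposures, overlapped_total, sequential_total)
```

Because each snapshot includes all the charge gathered so far, the three exposures nest inside one another, and the whole bracket takes only as long as the longest exposure.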

Of course, it may be that only Apple thought this was a good idea. :-)
 


Ric Ford

MacInTouch
I need to scan (with a MacBook Air 2011 running High Sierra) several hundred photos and slides this fall. They are old family photos, and the quality isn’t superb to begin with. Slides are Kodachrome. I would like advice about choosing a scanner. The results may never make it off a computer monitor onto paper. I am sure that a scanner costing under $200 is more than adequate. I tend toward the Epson V600. Is a dedicated film scanner for the slides enough better than a flatbed scanner to be worth the extra cost of buying a second scanner? If the flatbed doesn’t need to do slides, I could spend less on it.
I'm sure others will have more advice, but here are a few considerations off the top of my head:
  • My understanding is that you can do very good "scans" by photographing film with a good DSLR (perhaps using a macro lens for best results) and light source. (I think we've had past notes about that on macintouch.com.)
  • QromaScan is a neat system for scanning photos with an iPhone.
  • You can probably find services to do the scans for you. (I know of one local photo store that does that.)
 


I need to scan (with a MacBook Air 2011 running High Sierra) several hundred photos and slides this fall. They are old family photos, and the quality isn’t superb to begin with. Slides are Kodachrome. I would like advice about choosing a scanner. The results may never make it off a computer monitor onto paper. I am sure that a scanner costing under $200 is more than adequate. I tend toward the Epson V600.

Is a dedicated film scanner for the slides enough better than a flatbed scanner to be worth the extra cost of buying a second scanner? If the flatbed doesn’t need to do slides, I could spend less on it.
A dedicated slide scanner will produce better results than a flatbed scanner. If you can find a used one on eBay, you might get a good price.

Oh, never mind. I just looked up the Nikon Coolscan 5000, which is the model I have. It was $1000 new. Used models on eBay are going for $700 and up. But that's probably because it's still a great slide scanner... and one of the best for trying to scan Kodachrome.

I've only got a few thousand more slides to go....
 


Ric Ford

MacInTouch
Looking for more information about raw file formats, I discovered this Wikipedia article, which echoes and expands on some of the things Tracy has been explaining:
Wikipedia said:
Raw image format

... Raw files contain the information required to produce a viewable image from the camera's sensor data. The structure of raw files often follows a common pattern:
  • A short file header which typically contains an indicator of the byte-ordering of the file, a file identifier and an offset into the main file data
  • Camera sensor metadata which is required to interpret the sensor image data, including the size of the sensor, the attributes of the CFA and its color profile
  • Image metadata which is required for inclusion in any CMS environment or database. These include the exposure settings, camera/scanner/lens model, date (and, optionally, place) of shoot/scan, authoring information and other. Some raw files contain a standardized metadata section with data in Exif format.
  • An image thumbnail
  • Most raw files contain a full size JPEG conversion of the image, which is used to preview the file on the camera's LCD panel.
  • In the case of motion picture film scans, either the timecode, keycode or frame number in the file sequence which represents the frame sequence in a scanned reel. This item allows the file to be ordered in a frame sequence (without relying on its filename).
  • The sensor image data
A related article was very helpful in explaining how color profiles map color from one color space to another, critically to my understanding, via an intermediate "profile connection space."
Wikipedia said:
ICC profile

... suppose we have a particular RGB and CMYK color space, and want to convert from this RGB to that CMYK. The first step is to obtain the two ICC profiles concerned. To perform the conversion, each RGB triplet is first converted to the Profile connection space (PCS) using the RGB profile. If necessary the PCS is converted between CIELAB and CIEXYZ, a well defined transformation. Then the PCS is converted to the four values of C,M,Y,K required using the second profile.

So a profile is essentially a mapping from a color space to the PCS, and from the PCS to the color space. The profile might do this using tables of color values to be interpolated (separate tables will be needed for the conversion in each direction), or using a series of mathematical formulae.

A profile might define several mappings, according to rendering intent. These mappings allow a choice between closest possible color matching, and remapping the entire color range to allow for different gamuts.
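The RGB-to-PCS step can be sketched with the published sRGB-to-XYZ (D65) matrix. A real CMM would read the equivalent mapping (matrix or tables) out of the ICC profile rather than hard-coding it, and a second profile would then carry the PCS values on to the destination space:

```python
import numpy as np

# Published sRGB -> CIEXYZ (D65) matrix, applied to linear RGB:
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def srgb_linearize(v):
    """Undo the sRGB transfer curve (gamma) to get linear light."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_to_pcs_xyz(rgb):
    """First half of a conversion: device RGB -> profile connection space.
    The destination profile would map XYZ on to, e.g., CMYK."""
    return SRGB_TO_XYZ @ srgb_linearize(rgb)

# sRGB white (1,1,1) lands on the D65 white point, XYZ ~ (0.9505, 1.0, 1.089):
print(srgb_to_pcs_xyz([1.0, 1.0, 1.0]))
```

The PCS is what lets two arbitrary profiles interoperate: neither needs to know anything about the other, only how to get to and from XYZ (or CIELAB).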
 


Ric Ford

MacInTouch
Apple's brilliant designers have made it all but impossible to utilize the company's display calibration app effectively, but it's possible if you know the secret incantation:

You want to run

/System/Library/ColorSync/Calibrators/Display Calibrator.app

which is also available via

System Preferences > Displays > Color > Calibrate [button]

But... it's effectively dysfunctional unless you know to hold down the Option key while starting it, which enables a tiny checkbox for the secret "Expert Mode" and lets you actually access a clever calibration procedure to create a useful display profile that you can name and save.
 


Is a dedicated film scanner for the slides enough better than a flatbed scanner to be worth the extra cost of buying a second scanner? If the flatbed doesn’t need to do slides, I could spend less on it.
I wouldn't get a dedicated slide scanner, because when you're finished with your "limited" number of slides, you end up with an expensive door stop.

I know the scanners have changed over the years, but when I got my dedicated slide scanner, it was so very slow and would only do one at a time.

The new flatbed scanners with transparency capability can scan multiple slides at a time, and most have software that can save each slide as a separate image file, thus eliminating having to duplicate, crop and save multiple times to get each slide as an individual image.

I got a Canon 8600F, which was an excellent photo and transparency scanner. It had software that would scan and separate multiple slides at a time (or prints).

Of course, this is coming from an amateur's/hobbyist's point of view.
 


I'm sure others will have more advice, but here are a few considerations off the top of my head: ... QromaScan is a neat system for scanning photos with an iPhone.
I bought the QromaScan when it was still fairly new in the marketplace. It is quite a remarkable achievement - it did not eliminate the need for a flatbed scanner in my case, however, due to its 6"x8" lightbox size limitation. I gave it to a friend who didn't need anything larger, and she loves it.

I also bought the slide scan box, which works incredibly well, and the price is right.
... Oops, it looks like the slide box isn't available anymore. Maybe it was too difficult to keep manufacturing it to fit new iPhones, although my iPhone XS fits.
 


Looking for more information about raw file formats, I discovered this Wikipedia article, which echoes and expands on some of the things Tracy has been explaining:
It's actually way more complicated than that.

First of all, your image sensor is not a digital capture device, it's analog. It sees and converts photons into a storage charge. That storage charge is "counted" by something. But in that counting and conversion to digital process, there are a bunch of complications, including something Nikon calls pre-conditioning.

In other words, if your sensor "captured" 100 photons, the digital value is almost certainly not 100. We refer to the results of the ADC and preconditioning steps as DNs (digital numbers), and at that point we're totally in the digital realm. But prior to that, all sorts of interesting things happen.

For example, on Nikon cameras, Nikon uses their knowledge of the sensor color response to change the values that will become DNs for the blue and red channel. Why? Because the blue and red channels will be later shifted to provide "white balance" in your final image. What you want to try to avoid is having any really low blue or red channel value—e.g. red would be low in sky tones—because that will increase the appearance of random shot noise in that channel, and ultimately, the final image.

So, between the analog capture step and the ultimate production of a 12-bit or 14-bit data value (DN) in your file, all sorts of little things happen, and they vary considerably by manufacturer, even when using the same exact image sensor (some Sony and Nikon cameras use the same sensor, for example).

Likewise, different raw converters do different things with the DNs. The most obvious difference comes in the interpretation of white balance. You may well notice that no two raw converters agree on what the actual color temperature was when you shoot with Auto White Balance (and they may not agree for preset values, either). That's because all kinds of other things enter the picture, including whether they recognize all the information in the EXIF portion of the raw file and what "color model" they use to do all their tonal/color shifting.

The interesting thing that I've noticed over the last 20 years is that dissertations keep disappearing. There's been a lot of work in this area by PhD students, but what happens is that if it is useful, it becomes commercialized and the PhD dissertation that outlines that seems to disappear from easily found Internet resources. So, on top of the fact that there's a lot going on, everyone's trying to keep their secret sauce secret, too.
 


Ric Ford

MacInTouch
Additionally, some sensor systems involve taking many readings of each element over time. So each sensor element doesn't have one value but has a stream of values (which may be very large, depending on your exposure time). Although there may be some cameras that store all of these per-element streams in the raw file, I would expect most to perform some operation on the stream to produce a fixed-size amount of per-element data...
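For example (purely illustrative; camera vendors don't document the actual reduction step), a per-element stream of readings might be collapsed to a single stored value like this:

```python
def reduce_stream(samples, mode="sum"):
    """Collapse one photosite's stream of readings into a single value.

    Summing mimics simple charge accumulation; averaging would instead
    trade exposure for read-noise suppression. Both are assumptions,
    not a documented camera pipeline.
    """
    total = sum(samples)
    return total if mode == "sum" else total / len(samples)

print(reduce_stream([3, 5, 4, 4]))          # 16
print(reduce_stream([3, 5, 4, 4], "mean"))  # 4.0
```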
Sensor read-out speed and data capture/storage speed are critical issues, affecting:
  • video resolution, frame rates and compression quality
  • still image burst rates and length
  • rolling shutter that can wreak havoc on still photos in LED lighting and create problems with moving subjects
Meanwhile, Apple captures multiple images behind the scenes on an iPhone, and speed is an issue here, too, as moving objects will suffer from visible problems in the combined image. (You can compare saved HDR and non-HDR images with moving subjects on an iPhone to see the problems involved.) Scroll down in this article for related discussion.
DXOMark said:
Apple iPhone 7 camera review: better than ever

Apple’s multi-image assembly is actually a low-effort alternative to RAW

The traditional definition and purpose of a RAW image has been challenged by the multi-frame algorithm described by Apple as part of the iPhone 7 announcement.

Because it starts from a set of 3 to 7 RAW images (Apple did not disclose the exact number), the native Apple camera application already has a giant lead on third-party RAW processing applications for the iPhone. When the user presses the shutter to capture an image, the smartphone’s processing chipset selects the best reference frame using a variety of criteria, including face detection and focus quality. It then merges that image with neighboring frames to improve the dynamic range and reduce noise.
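The reference-frame-plus-neighbors process described above can be caricatured in a few lines. Everything here is a guess at the shape of the algorithm, not Apple's actual (undisclosed) code; the sharpness score stands in for the face-detection and focus-quality criteria:

```python
def merge_frames(frames, sharpness):
    """Pick the sharpest frame as reference, then average it with its
    immediate neighbors to lift dynamic range and cut noise.

    frames: list of equal-length pixel lists; sharpness: one score per
    frame. Both the selection criterion and the plain average are
    stand-ins for whatever heuristics the real pipeline uses.
    """
    ref = max(range(len(frames)), key=lambda i: sharpness[i])
    merged = [f for i, f in enumerate(frames) if abs(i - ref) <= 1]
    return [sum(px) / len(merged) for px in zip(*merged)]

print(merge_frames([[10, 10], [12, 14], [8, 6]], [0.2, 0.9, 0.1]))
```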
 


I need to scan (with a MacBook Air 2011 running High Sierra) several hundred photos and slides this fall. They are old family photos, and the quality isn’t superb to begin with. Slides are Kodachrome. I would like advice about choosing a scanner. The results may never make it off a computer monitor onto paper. I am sure that a scanner costing under $200 is more than adequate. I tend toward the Epson V600.
Is a dedicated film scanner for the slides enough better than a flatbed scanner to be worth the extra cost of buying a second scanner? If the flatbed doesn’t need to do slides, I could spend less on it.
Before spending a few hundred dollars on equipment, definitely ask yourself how much your time is worth and how likely you actually are to finish scanning everything by yourself. A lot of photography stores offer digitization services at very reasonable prices, typically well under $1/image in bulk. For example, here is the price list for a reputable shop in my area of Connecticut:

I'm pretty picky about tweaking images, and the service I used did a good job for the vast majority of photos I had. For those few images that I felt needed some more adjustment, it wasn't a problem for me to go back to the source material and manually rescan and edit them.

For me, the real issue was the time saved by having a service do the scanning. I had ambitions of doing all the work by myself, and I started the project using my own equipment, but it quickly became clear I'd never finish the project on my own, as everyday distractions competed for my attention. It's far better to have someone else do most of the work and then, if needed, manually redo a handful of photos needing special attention than it is to fail to complete the work on your own.

FWIW, the equipment that I used for my manual scans, which I do like quite a bit, was an Epson Perfection V700 and an Epson FastFoto FF-640. The FF-640 was terrific at scanning routine, consumer-level prints very quickly and at quite good quality, but it's an automatic-feed device, so it is not suitable for prints that are too stiff, too flimsy, or unusually sized. It has very nice integration with Dropbox and other file storage systems.

My only real criticisms of the FF-640 are its price (a little high, but not unreasonable, in my opinion), its sensitivity to dust (for good output, make sure that the scan path is clean and that source photos are relatively dust-free; keep the scanner covered when not in use), and its non-configurable auto-launching of the Epson software upon login (you can disable this using launchctl or a launch control utility like Lingon X without interfering with normal operation).
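For the auto-launch behavior, disabling the launch agent looks roughly like this. The plist filename below is a guess for illustration; list ~/Library/LaunchAgents to find the actual Epson entry on your machine:

```shell
# Hypothetical filename -- check ~/Library/LaunchAgents for the real one.
PLIST="$HOME/Library/LaunchAgents/com.epson.fastfoto.plist"
if [ -f "$PLIST" ]; then
    # -w writes the disable to disk so it survives logouts and reboots
    launchctl unload -w "$PLIST"
else
    echo "no Epson launch agent at $PLIST"
fi
```

A utility like Lingon X does the same thing with a GUI checkbox, which is easier if you'd rather not touch the command line.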

As for the Perfection V700, it's an excellent scanner, and it does a very reasonable job of scanning slides with its slide adapter, though it can be quite time-consuming. I understand that the V600 works similarly. There are a few decent tutorial videos on YouTube that might be worth checking out.
 



