type: Post
status: Published
date: Aug 2, 2024
slug: ISP
summary:
tags: study, tech
category: Tech
icon:
password:
Color tutorials
Color and color spaces
The most common color space is CIE XYZ, but before we get into it, we first need a more basic view of how we perceive color.
Spectral power distribution (SPD)
Every color we perceive corresponds to a mixture of wavelengths in the visible spectrum, which can be described by a spectral power distribution (SPD), shown below. An SPD plot shows the relative power at each wavelength over the visible spectrum.

Due to the integrating (accumulation) effect of the cones, two different SPDs can be perceived as the same color; such SPDs are called “metamers.”
Tristimulus color theory
It was found that any color can be expressed as a linear combination of three primary colors. Grassmann’s Law states that a source color can be matched by a linear combination of three independent “primaries.”

Radiometry vs. photometry/colorimetry
Radiometry
- Quantitative measurements of radiant energy.
- Often shown as spectral power distributions (SPD).
- Measures light coming from a source (radiance) or light falling on a surface (irradiance).
Photometry/colorimetry (psychophysical)
- Quantitative measurement of perceived radiant energy, based on human sensitivity to light.
- Perceived in terms of “brightness” (photometry) and color (colorimetry).
Now we face the problem of bridging the gap between radiometry and photometry/colorimetry. How can we do that? In flicker experiments, we can measure how bright humans perceive lights with different SPDs to be.

Inverting the result gives what is called the CIE (1924) photopic luminosity function.

The luminosity function (written as ȳ(λ) or V(λ)) describes the eye’s efficiency at converting radiant energy into luminous energy (perceived radiant energy), based on human experiments (the flicker-fusion test).
And finally, we can define a quantity that measures the perceived brightness of different lights. That is:

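The equation image is missing here; reconstructing from the standard photometric definition, the quantity is the luminous-flux integral of the SPD weighted by V(λ):

```latex
\Phi_v = K_m \int_{380}^{780} \Phi_e(\lambda)\, V(\lambda)\, d\lambda,
\qquad K_m \approx 683\ \mathrm{lm/W}
```

where Φₑ(λ) is the radiant (spectral) power and K_m is the maximum luminous efficacy constant.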
Now let’s figure out how to go from radiometry to colorimetry. Based on tristimulus color theory, colorimetry attempts to quantify all visible colors in terms of a standard set of primaries. By recruiting “standard observers,” we can carry out color-matching experiments and thus establish such a measurement.

Notice that there are negative values in the resulting color-matching functions; we can apply a linear transformation to avoid this.
In 1931, the CIE met and approved defining a new canonical basis, termed XYZ that would be derived from Wright-Guild’s CIE RGB data.
- Properties desired in this conversion:
- Positive values only
- Pure white light (flat SPD) to lie at chromaticity x = y = 1/3
- Y would be the luminosity function (V(λ))
- Quite a bit of freedom in selecting the XYZ basis
- In the end, the adopted transform was:


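The image of the transform is missing here; the commonly quoted CIE 1931 RGB-to-XYZ matrix is:

```latex
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
= \frac{1}{0.17697}
\begin{bmatrix}
0.49000 & 0.31000 & 0.20000 \\
0.17697 & 0.81240 & 0.01063 \\
0.00000 & 0.01000 & 0.99000
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
```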
To go from an SPD to CIE 1931 XYZ, we carry out three integrals, one per color-matching function.
CIE 1931 XYZ is one of the most common color spaces, and we will refer to it repeatedly below.
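As a sketch, the three integrals can be computed numerically. The color-matching functions x̄, ȳ, z̄ are assumed to be supplied as sampled arrays (real values come from the published CIE tables):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration of sampled y over sample points x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def spd_to_xyz(wavelengths, spd, xbar, ybar, zbar):
    """Integrate an SPD against sampled CIE color-matching functions.

    All inputs are 1-D arrays sampled at the same wavelengths (nm).
    """
    X = _trapz(spd * xbar, wavelengths)
    Y = _trapz(spd * ybar, wavelengths)
    Z = _trapz(spd * zbar, wavelengths)
    return X, Y, Z
```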
Luminance-chromaticity space (CIE xyY)
Sometimes it is useful to discuss color in terms of luminance (perceived brightness) and chromaticity (which we can think of as hue and saturation combined). The CIE xyY space is used for this purpose.
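The conversion from XYZ to xyY is a simple normalization; a minimal sketch:

```python
def xyz_to_xyY(X, Y, Z):
    """Project XYZ to chromaticity (x, y) plus luminance Y."""
    s = X + Y + Z
    if s == 0:
        return 0.0, 0.0, 0.0  # black: chromaticity undefined, return zeros
    return X / s, Y / s, Y
```

Note that Y is carried through unchanged, so luminance is preserved while (x, y) captures only the chromaticity.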

Color constancy and color temperature
Color constancy
Our visual system has an amazing ability to compensate for environmental illumination such that objects are perceived as the same color.
Color constancy (chromatic adaptation) is the ability of the human visual system to adapt to scene illumination. This ability is not perfect, but it works fairly well. Image sensors do not have this ability! We will discuss this in part 2; it is related to the camera’s white-balance module.
We can use the Von Kries transform to describe this kind of adaptation.

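A minimal sketch of the Von Kries idea: convert the source and destination white points into a cone-response space, scale each channel by the ratio of responses, and convert back. The Bradford matrix is used here as one common published choice of cone-response matrix; the helper names are illustrative:

```python
import numpy as np

# Bradford cone-response matrix (a common choice for Von Kries-style adaptation)
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def von_kries_adapt(xyz, src_white, dst_white):
    """Adapt an XYZ color from src_white illumination to dst_white."""
    lms_src = BRADFORD @ np.asarray(src_white, float)
    lms_dst = BRADFORD @ np.asarray(dst_white, float)
    scale = np.diag(lms_dst / lms_src)            # per-cone gain
    M = np.linalg.inv(BRADFORD) @ scale @ BRADFORD
    return M @ np.asarray(xyz, float)
```

By construction, adapting the source white point itself yields exactly the destination white point.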
It is also notable that viewers adapt to printed media more readily than to emissive media.
Everything we have discussed so far indicates that:
- Color is intimately connected to scene illumination.
- Even for emissive displays, we have to consider (or make assumptions) about the illumination in the viewing environment of the display.
- Keep this in mind because it will play a role when we define color spaces used to encode our images.
Color temperature
To better describe the color of different illuminants, we introduce color temperature. In the photography and display communities, an illuminant’s “color” is described using a correlated color temperature (CCT). This is an excellent example of where metamers are used.
As mentioned, illuminants are often described by their “color temperature.” This mapping is based on theoretical blackbody radiators that produce SPDs for a given temperature expressed in Kelvin (K). We map light sources (both real and synthetic) to their closest color temperature.


We can plot these blackbody SPDs in CIE xy chromaticity; the resulting curve is the Planckian locus.

Checking the CCT of a given illuminant is simple:
- Map the light source’s SPD to CIE XYZ using the CIE 1931 color-matching functions.
- Project the resulting CIE xyY value onto the Planckian locus.
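The projection step can also be approximated directly; McCamy’s formula is a widely used approximation that goes straight from chromaticity (x, y) to CCT without an explicit locus search:

```python
def mccamy_cct(x, y):
    """Approximate CCT in Kelvin from CIE xy chromaticity (McCamy, 1992).

    Valid roughly between 2000 K and 12500 K for points near the
    Planckian locus.
    """
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33
```

For the D65 chromaticity (0.3127, 0.3290) this returns approximately 6500 K, as expected.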
An example with an OLED light source is shown below.

Where the projection falls is the Correlated Color Temperature (CCT) of this light source. So, in this example, the OLED light source is roughly 4500K.
While we often say “color temperature,” we should say “correlated color temperature.” The concept is not necessarily related to the physical temperature of the light source, but to its correlation with a blackbody radiator’s color temperature.
Having defined CCT, we can easily define the white point. A white point is a color, defined in CIE xyY, that we want to be considered “white” (achromatic/neutral). It is essentially an illuminant’s SPD expressed in CIE XYZ/CIE xyY. Think of it as the CIE xyY value of a white piece of paper under some illumination.

Summary
Color constancy is our ability to adapt to illumination in the scene.
Correlated Color Temperature (CCT) — or just color temperature — is a system used to describe scene illumination.
Note: we must factor in the scene illumination when capturing and displaying color images.
Color model versus color space
A color model is a mathematical system for describing a color as a tuple of numbers (RGB, HSV, HSL, and more…)
A color space is a specific range of colors within a color model. The range of color (gamut) can be expressed in CIE XYZ. Color spaces typically also define the viewing environment and, therefore, the “white point” of the space.

We must understand that no specific color space can cover all of CIE XYZ, only part of it. We must also realize that different choices of primaries and white point define different color spaces, which implies a vast range of possible color spaces.
In 1996, Microsoft and HP defined a set of “standard” RGB primaries: R = CIE xyY (0.64, 0.33, 0.2126), G = CIE xyY (0.30, 0.60, 0.7153), B = CIE xyY (0.15, 0.06, 0.0721). This was considered an RGB space achievable by most devices at the time. The white point was set to the D65 illuminant. This is important to note: it means sRGB has a built-in assumed viewing condition (6500 K daylight).
A matrix defines the transformation from CIE XYZ to linear sRGB:

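The matrix image is missing here; the commonly published XYZ-to-linear-sRGB matrix (for D65-adapted XYZ with Y normalized to 1) is sketched below:

```python
import numpy as np

# XYZ (D65, Y in [0, 1]) -> linear sRGB, standard published coefficients
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_linear_srgb(xyz):
    """Map a D65-adapted XYZ triple to linear (not gamma-encoded) sRGB."""
    return XYZ_TO_SRGB @ np.asarray(xyz, float)
```

A sanity check: the D65 white point maps to (1, 1, 1), i.e. pure white in sRGB, which is exactly the built-in viewing assumption noted above.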
Before introducing something new, let’s take a look at Stevens’ power law.

This tells us that if we want colors to appear natural, the encoding needs to be non-linear; that’s why we have the gamma curve.

The actual formula is a bit more complicated, but effectively this is a gamma curve (γ ≈ 2.2), where I′ is the output intensity and I is linear sRGB in the range 0–1, with a small linear segment for linearized sRGB values close to 0 (not shown in this plot). This is known as “perceptual encoding,” and it is intended to allocate more bits based on our nonlinear response to radiant power.
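The full sRGB transfer function, including the small linear segment near 0, can be sketched with the standard published constants:

```python
def srgb_encode(c):
    """Encode one linear sRGB value in [0, 1] to gamma-encoded sRGB."""
    if c <= 0.0031308:
        return 12.92 * c                      # linear segment near black
    return 1.055 * c ** (1.0 / 2.4) - 0.055   # power-law segment
```

Despite the 2.4 exponent in the formula, the combined curve behaves close to a pure γ ≈ 2.2 power law overall.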

Generally speaking, these color spaces are related like a family:

Finally, we introduce a few more common color spaces.
- CIE Lab
- Others to be aware of

In-camera rendering pipeline

Camera
The image directly captured from the camera’s sensor needs to be processed. We can call this process “rendering,” as the goal is to render a digital image suitable for viewing.

Image signal processor (ISP)
An ISP is dedicated hardware that renders the sensor image to produce the final output.
Companies such as Qualcomm, HiSilicon, Intel (and more) sell ISP chips (often as part of a System on a Chip – SoC). Companies can customize the ISP. Many ISPs now have neural processing units (NPUs).
Camera sensor
Almost all consumer camera sensors are based on complementary metal-oxide-semiconductor (CMOS) technology.
We generally describe sensors in terms of pixel count and physical size. The larger the sensor, the better the noise performance, as more light can fall on each pixel. Smartphones have small sensors!
The camera places a specific pattern of color filters over its sensor to capture photons.

The color filter array (CFA) on the camera filters the light into three sensor-specific RGB primaries.
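The most common CFA layout is the Bayer pattern; a sketch of an indexing helper for the RGGB variant (the function name and the choice of RGGB ordering are illustrative):

```python
def bayer_channel(row, col):
    """Return which primary an RGGB Bayer pixel at (row, col) samples."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"   # even rows alternate R, G
    return "G" if col % 2 == 0 else "B"       # odd rows alternate G, B
```

Note that green appears twice per 2×2 tile, roughly matching the eye’s higher sensitivity to green wavelengths (the peak of V(λ) discussed earlier).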