The Science Behind PaletteReveal
PaletteReveal is built on a hundred years of color science, half a century of personal-color-analysis practice, and a small Node engine that does perceptual color math. This page explains what's actually happening when you click skin samples, in plain English — but with the real math.
1. Why personal color analysis works at all
Your skin reflects light. The light it reflects has a specific spectral signature determined by melanin, hemoglobin, sub-dermal scatter, and the surface layer's keratin. That signature is reasonably stable for any individual under daylight — and it's what determines which colors of clothing or makeup near your face will complement the reflected light versus compete with it.
When a color clashes with your skin's reflectance signature, your visual system sees the conflict as desaturation, dullness, or "tiredness". When a color complements it, the same system reads brightness, clarity, and life. None of this is arbitrary — it's a real optical phenomenon. The challenge is identifying your reflectance signature efficiently.
2. The three characteristics
Modern personal color analysis (the "Sci/Art" 12-season system) describes any individual's coloring along three axes:
- Undertone — warm, cool, neutral, or olive. The temperature underneath your surface tone.
- Value (contrast / depth) — light, medium, or deep. How much luminance difference there is between your skin, hair, and eyes.
- Chroma (clarity) — bright (high chroma, clear features) or soft (low chroma, blended features).
Combine these and you land in one of the twelve seasons. The challenge is measuring each axis from a photograph reliably.
3. Why we use CIE Lab, not RGB
RGB is how computers store color, but it's not how human vision perceives color. In RGB, equal numerical changes don't correspond to equal perceptual changes — bumping R from 200 to 210 looks very different from bumping B from 50 to 60.
The CIE 1976 Lab color space (often written L*a*b*) was designed so that numerical distance in the space approximately matches the difference your eye perceives. Three axes:
- L: lightness, 0 (black) to 100 (white)
- a: red ↔ green, roughly −128 to +127 (positive = red, negative = green)
- b: yellow ↔ blue, roughly −128 to +127 (positive = yellow, negative = blue)
For skin tones, a is almost always slightly positive (red component), and b is almost always positive (skin is yellow-leaning relative to a perfect grey). Different people's skin sits in a relatively narrow region of Lab space — but the differences within that region are what we need to capture, and Lab makes those differences perceptually meaningful.
The conversion formula (high level)
RGB → Lab is a two-step process:
- RGB → XYZ via gamma decoding and the standard sRGB → CIE XYZ matrix.
- XYZ → Lab via a non-linear cube-root function applied to ratios against the D65 reference white.
If you're curious, the engine code lives in app.js as rgbToLab. It's about 12 lines of code.
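For reference, here is a minimal sketch of that standard conversion — the textbook sRGB → XYZ → Lab math with the D65 white point, not necessarily the verbatim app.js code:

```js
// Standard sRGB (0–255 per channel) → CIE Lab, D65 reference white.
function rgbToLab(r, g, b) {
  // 1. Gamma-decode sRGB to linear light in 0–1.
  const [lr, lg, lb] = [r, g, b].map(v => {
    v /= 255;
    return v <= 0.04045 ? v / 12.92 : ((v + 0.055) / 1.055) ** 2.4;
  });
  // 2. Linear RGB → XYZ via the standard sRGB matrix, divided by D65 white.
  const x = (0.4124 * lr + 0.3576 * lg + 0.1805 * lb) / 0.95047;
  const y = (0.2126 * lr + 0.7152 * lg + 0.0722 * lb) / 1.0;
  const z = (0.0193 * lr + 0.1192 * lg + 0.9505 * lb) / 1.08883;
  // 3. Cube-root compression of the white-relative ratios.
  const f = t => (t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16 / 116);
  const [fx, fy, fz] = [x, y, z].map(f);
  return { L: 116 * fy - 16, a: 500 * (fx - fy), b: 200 * (fy - fz) };
}
```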
4. Sampling your skin (native resolution)
When you click a skin point, the engine does not sample the displayed (downscaled) preview canvas — it samples the original image at native resolution. Why? A 4000×3000 phone photo gets fitted into a 640×480 preview canvas, which is a ~6× downsample. Vein lines, freckles, and sub-pixel skin variation all get blurred into pixel averages at preview resolution, so you'd be sampling already-averaged data.
Instead, the engine:
- Loads your full-resolution photo into a hidden canvas (`state.nativeCanvas`).
- Translates your click from preview coords to native coords using the centered fit transform.
- Samples a small radius (scaled to the native resolution — bigger photos = bigger sample window so the same physical area is averaged) and computes the average RGB of that patch.
This recovers all the detail your camera captured, before any display resampling.
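Under a centered "fit" transform, that click translation is a small piece of arithmetic. A sketch (variable names are illustrative, not the actual app.js fields):

```js
// Map a click at (px, py) on the preview canvas to native-image pixels,
// assuming the image was scaled to fit and centered with letterbox margins.
function previewToNative(px, py, preview, native) {
  const scale = Math.min(preview.width / native.width,
                         preview.height / native.height);
  const offsetX = (preview.width - native.width * scale) / 2;
  const offsetY = (preview.height - native.height * scale) / 2;
  return {
    x: Math.round((px - offsetX) / scale),
    y: Math.round((py - offsetY) / scale),
  };
}
```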
5. The "robust skin colors" filter
Even with native sampling, you might accidentally click on a freckle, a shadow, an eyebrow hair, a tear-duct highlight, or a strand of hair. The engine filters samples through robustSkinColors, which:
- Drops pixels that are too dark (brightness < 55) or too bright (> 242) — those are shadows and blown highlights, not skin.
- Drops pixels with HSL saturation > 0.58 — those are likely lipstick, blush, or eye makeup.
- Drops pixels where the red channel isn't dominant over green and blue (`r < g × 0.72` or `r < b × 0.68`) — non-skin colors.
After the filter, a median-distance trim removes outliers. The remaining samples are averaged to produce the "robust skin" Lab signature.
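In code, the filter stage looks roughly like this — a sketch of the rules above, where brightness is assumed to be the plain channel mean (the real robustSkinColors may differ in detail):

```js
// Keep only plausible skin pixels; pixels is an array of {r, g, b}.
function filterSkinPixels(pixels) {
  return pixels.filter(({ r, g, b }) => {
    const brightness = (r + g + b) / 3;              // assumed definition
    if (brightness < 55 || brightness > 242) return false; // shadow / blown highlight
    const max = Math.max(r, g, b) / 255;
    const min = Math.min(r, g, b) / 255;
    const l = (max + min) / 2;
    const s = max === min ? 0 : (max - min) / (1 - Math.abs(2 * l - 1)); // HSL saturation
    if (s > 0.58) return false;                      // likely makeup
    if (r < g * 0.72 || r < b * 0.68) return false;  // red not dominant → non-skin
    return true;
  });
}
```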
6. Classifying undertone — the Lab b-axis
The single most reliable signal for warm vs cool undertone is the Lab b-axis (yellow ↔ blue). Higher b = more yellow = warmer; lower b = less yellow = cooler.
Empirically, after testing hundreds of skin samples, the engine uses these thresholds:
| Lab b | Lab a | Classification |
|---|---|---|
| ≥ 21 | — | Warm |
| 17 – 20 | — | Warm-neutral |
| 14 – 16 | — | Cool-neutral |
| ≤ 13 | — | Cool |
| 17 – 22 | < 13 (low red) | Olive |
The olive case is special: olive skin has moderately high b (yellow component) but unusually low a (less red) — the classic green-yellow tilt. We test the joint condition b > 17 AND a < 13 AND b − a > 5 to catch it.
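As code, the decision reads roughly like this — a sketch of the table plus the joint olive condition, with the olive check run first because its range overlaps the warm bands:

```js
// Undertone from the averaged skin Lab values.
function classifyUndertone(a, b) {
  // Joint olive condition; the 17–22 band comes from the table above.
  if (b > 17 && b <= 22 && a < 13 && b - a > 5) return 'olive';
  if (b >= 21) return 'warm';
  if (b >= 17) return 'warm-neutral';
  if (b >= 14) return 'cool-neutral';
  return 'cool';
}
```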
Earlier versions of the engine used an RGB-based "warm bias" formula (r + g × 0.72 − b × 1.28). It classified almost every skin tone as warm. The Lab b-axis fixed that.
7. Contrast and clarity
Contrast comes directly from Lab L:
- L < 50 → deep
- 50 ≤ L ≤ 75 → medium
- L > 75 → light
Clarity comes from HSL saturation (a separate color space we compute alongside Lab):
- S > 0.34 → bright/clear
- S < 0.18 → soft/muted
- otherwise → balanced
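Both bucketings are simple threshold checks — a sketch:

```js
// Depth from Lab L, clarity from HSL saturation (thresholds as listed above).
function depthAndClarity(L, S) {
  const depth = L < 50 ? 'deep' : L <= 75 ? 'medium' : 'light';
  const clarity = S > 0.34 ? 'bright' : S < 0.18 ? 'soft' : 'balanced';
  return { depth, clarity };
}
```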
8. Ranking the 12 seasons
Each season has a target Lab signature representing the "average" person of that season:
| Season | Target L | Target b |
|---|---|---|
| Light Spring | 80 | 22 |
| True Spring | 72 | 26 |
| Bright Spring | 70 | 22 |
| Light Summer | 78 | 12 |
| True Summer | 70 | 11 |
| Soft Summer | 67 | 13 |
| Soft Autumn | 66 | 22 |
| True Autumn | 60 | 27 |
| Dark Autumn | 50 | 22 |
| Dark Winter | 50 | 13 |
| True Winter | 60 | 9 |
| Bright Winter | 60 | 13 |
For your sample, the engine computes a score for each season:
score = skinDistance × 0.55
+ |yourL − targetL| × 0.55
+ |yourB − targetB| × 1.4
+ (warmLike && family∈{Summer,Winter} ? 14 : 0)
+ (coolLike && family∈{Spring,Autumn} ? 14 : 0)
+ (light penalty / dark penalty / soft / bright clauses…)
where skinDistance is the smallest Lab ΔE to any color in that season's reference skin gradient. The b-axis carries a higher weight (1.4) than L (0.55) because warmth differentiates seasons more strongly than depth.
Lower score wins. The winning season is shown as the result; you can also pick any season manually from the dropdown to see its palette rendered against your skin.
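A sketch of that scoring, with the final penalty clauses elided exactly as in the formula above (the season data structure is illustrative, not the real app.js shape):

```js
// Score one season against the sampled skin Lab; lower is better.
function scoreSeason(season, skin, warmLike, coolLike) {
  // Smallest plain Lab distance (CIE76 ΔE) to the season's reference skins.
  const skinDistance = Math.min(...season.skinGradient.map(ref =>
    Math.hypot(skin.L - ref.L, skin.a - ref.a, skin.b - ref.b)));
  let score = skinDistance * 0.55
            + Math.abs(skin.L - season.targetL) * 0.55
            + Math.abs(skin.b - season.targetB) * 1.4;
  if (warmLike && ['Summer', 'Winter'].includes(season.family)) score += 14;
  if (coolLike && ['Spring', 'Autumn'].includes(season.family)) score += 14;
  // ...light / dark / soft / bright penalty clauses omitted.
  return score;
}
```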
9. The wrist-vein chroma extraction
The vein analyzer is a separate tool. Veins on a wrist photo are typically 1–2 pixels wide on a phone-camera image, with only 5–15 RGB units of contrast against surrounding skin. A naive "click and average" sample returns mostly skin tone — useless.
Our solution: when you open the zoom modal and toggle "Show vein contrast", the engine does this:
- Reads the rendered zoom canvas (400×400 px of native image data).
- Computes the mean RGB across all visible pixels.
- Re-computes the mean using only pixels brighter than the first mean — this excludes the dim vein pixels and produces a clean "skin baseline".
- For every pixel, computes `(r − meanR, g − meanG, b − meanB) × 5`, centered around mid-grey (128, 128, 128).
- Replaces the canvas with this amplified deviation image.
Result: skin areas become near-grey, and any pixel that deviates from skin (vein lines, capillaries, freckles) is amplified by 5× into visible color. Real blue veins surface as bright cyan; warm veins as olive-yellow. The eye finally sees what was buried in JPEG noise.
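A sketch of that amplification pass on a canvas 2D context (illustrative — the engine's handling of rounding and edge cases may differ):

```js
// Amplify per-pixel deviations from the skin baseline (×gain), re-centered
// on mid-grey so skin reads as neutral and veins pop as color.
function amplifyDeviations(ctx, width, height, gain = 5) {
  const img = ctx.getImageData(0, 0, width, height);
  const d = img.data;               // RGBA bytes
  const n = d.length / 4;
  // Pass 1: mean RGB over all pixels.
  let mr = 0, mg = 0, mb = 0;
  for (let i = 0; i < d.length; i += 4) { mr += d[i]; mg += d[i + 1]; mb += d[i + 2]; }
  mr /= n; mg /= n; mb /= n;
  // Pass 2: re-mean over pixels brighter than the first mean — dim vein
  // pixels drop out, leaving a clean skin baseline.
  const thresh = (mr + mg + mb) / 3;
  let sr = 0, sg = 0, sb = 0, k = 0;
  for (let i = 0; i < d.length; i += 4) {
    if ((d[i] + d[i + 1] + d[i + 2]) / 3 > thresh) {
      sr += d[i]; sg += d[i + 1]; sb += d[i + 2]; k++;
    }
  }
  if (k > 0) { mr = sr / k; mg = sg / k; mb = sb / k; }
  // Pass 3: replace each pixel with its amplified deviation from the baseline
  // (Uint8ClampedArray clamps assignments to 0–255 automatically).
  for (let i = 0; i < d.length; i += 4) {
    d[i]     = 128 + (d[i]     - mr) * gain;
    d[i + 1] = 128 + (d[i + 1] - mg) * gain;
    d[i + 2] = 128 + (d[i + 2] - mb) * gain;
  }
  ctx.putImageData(img, 0, 0);
}
```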
The pixel under the crosshair is read either from the original photo (Photo mode) or from the amplified canvas (Contrast mode). The classifier uses whichever you confirmed.
10. Personalization within a season
When you pick (or are auto-classified into) a season, the engine has options for how to render the canonical palette colors:
- Pure canonical — Bright Winter Magenta is always `#D1008F` for everyone. Simple and consistent.
- Skin-tilted — shift each palette color's Lab values by a fraction of `(your skin Lab − season-typical skin Lab)`. This produces visibly different "Bright Winter Magenta" hexes for two different individuals.
We chose option 1 (canonical). The Bright Winter palette is a fixed character — Magenta, Cobalt, Icy Lemon, Hot Pink, Ink Black, Bright Emerald — and tilting those colors per-individual would erode the season's identity. What is personalized per individual: the skin-tone shade range (your own skin in 6 steps light-to-dark, computed from your sampled Lab values) and the makeup palette (which physically sits on skin and benefits from per-individual tuning).
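For illustration only, that shade range could be generated by stepping L around your sampled value — a hypothetical sketch; the actual step spacing lives in app.js:

```js
// Six skin shades, light to dark, around the sampled Lab value.
// The 24-unit L span is an assumed, illustrative choice.
function skinShadeRange(skin, steps = 6, span = 24) {
  return Array.from({ length: steps }, (_, i) => ({
    L: Math.max(5, Math.min(98, skin.L + span / 2 - (span * i) / (steps - 1))),
    a: skin.a,
    b: skin.b,
  }));
}
```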
11. Limitations
Photo-based color analysis has hard limits. We're honest about them:
- Lighting dominates. Indoor warm bulbs (2700 K) shift Lab b by 5–8 units — enough to flip a Cool result into Warm-neutral. Daylight is the only reliable input.
- JPEG chroma subsampling (4:2:0) halves color resolution. Subtle vein color shifts get averaged into surrounding skin before we see them.
- Camera "auto-enhance" / Portrait mode warms skin to look "nicer". A Cool person photographed in Portrait mode often classifies as Warm-neutral.
- Foundation / makeup masks the underlying skin. Always sample bare skin areas.
- Sampling depends on user precision. Click on a freckle, shadow, or shine and the result skews. The robust filter helps but doesn't save bad input.
- The season system itself is a discrete classifier. Real human coloring is continuous; we slot you into one of 12 buckets. Some people sit between two seasons ("flow"), and the engine returns the closer one without flagging the ambiguity.
- No face-segmentation. The current engine doesn't automatically detect skin areas — you click them. Future versions could use ML face-parsing to remove this manual step.
For these reasons, we recommend treating the result as a thoughtful starting point. An in-person consultation with a trained color analyst, in proper daylight with physical drape fabrics, is still the gold standard for definitive analysis.
12. The historical lineage
None of this is new. The intellectual roots:
- Johannes Itten (1888–1967), Bauhaus color theorist, observed that artists' personal coloring tends to align with their preferred palette. His four-season metaphor sits at the foundation of modern personal color analysis.
- Suzanne Caygill (1911–1994) developed the modern personal-color framework in 1940s–80s America.
- Carole Jackson's Color Me Beautiful (1980) popularized the four-season system to the general public.
- Christine Scaman, Kathryn Kalisz, and the "Sci/Art" school (1990s–) refined the framework into the 12-season system PaletteReveal uses, accounting for variations in lightness, depth, and brightness.
- CIE (the International Commission on Illumination) defined the perceptually-uniform Lab color space in 1976, which makes the math behind PaletteReveal possible.
13. Where to read more
- What Is Seasonal Color Analysis? — beginner-friendly walkthrough
- Warm vs Cool Undertones — the five practical tests
- Best Photo for Color Analysis — how to feed the engine well
- Why Some Colors Make You Look Tired — the optical conflict, explained
- CIE Lab on Wikipedia — the academic reference for the color space we use
This page documents the engine as of May 2026. Algorithms are revised regularly; check back for updates. PaletteReveal provides stylistic guidance only — not medical, biometric, or professional analysis.