The Science Behind PaletteReveal

A complete walkthrough of how the engine works. Updated 2 May 2026 · 12-minute read.

PaletteReveal is built on a hundred years of color science, half a century of personal-color-analysis practice, and a small Node engine that does perceptual color math. This page explains what's actually happening when you click skin samples, in plain English — but with the real math.

Quick answer: We sample your skin at native resolution, convert RGB to CIE Lab (a perceptually uniform color space), use the b-axis (yellow ↔ blue) to classify undertone and the L-axis (lightness) to classify contrast, then rank the twelve seasonal palettes by perceptual distance from your individual signature. The result is the season whose archetypal Lab values are closest to yours, with penalties for warm/cool family mismatch.

1. Why personal color analysis works at all

Your skin reflects light. The light it reflects has a specific spectral signature determined by melanin, hemoglobin, sub-dermal scatter, and the surface layer's keratin. That signature is reasonably stable for any individual under daylight — and it's what determines which colors of clothing or makeup near your face will complement the reflected light versus compete with it.

When a color clashes with your skin's reflected light, your visual system sees the conflict as desaturation, dullness, or "tiredness". When a color complements it, the same visual system reads brightness, clarity, and life. None of this is subjective in the abstract — it's a real optical phenomenon. The challenge is identifying your reflectance signature efficiently.

2. The three characteristics

Modern personal color analysis (the "Sci/Art" 12-season system) describes any individual's coloring along three axes:

  1. Undertone: warm ↔ cool, with neutral and olive between (read from the Lab b-axis; section 6).
  2. Contrast (depth): light ↔ dark (read from Lab L; section 7).
  3. Clarity: bright ↔ soft (read from HSL saturation; section 7).

Combine these and you land in one of the twelve seasons. The challenge is measuring each axis from a photograph reliably.

3. Why we use CIE Lab, not RGB

RGB is how computers store color, but it's not how human vision perceives color. In RGB, equal numerical changes don't correspond to equal perceptual changes — bumping R from 200 to 210 looks very different from bumping B from 50 to 60.

The CIE 1976 Lab color space (often written L*a*b*) was specifically designed so that distance in the space matches distance as your eye perceives it. Three axes:

  1. L: lightness, from 0 (black) to 100 (white).
  2. a: green (negative) ↔ red (positive).
  3. b: blue (negative) ↔ yellow (positive).

For skin tones, a is almost always slightly positive (red component), and b is almost always positive (skin is yellow-leaning relative to a perfect grey). Different people's skin sits in a relatively narrow region of Lab space — but the differences within that region are what we need to capture, and Lab makes those differences perceptually meaningful.

The conversion formula (high level)

RGB → Lab is a two-step process:

  1. RGB → XYZ via gamma decoding and the standard sRGB → CIE XYZ matrix.
  2. XYZ → Lab via a non-linear cube-root function applied to ratios with the D65 reference white.

If you're curious, the engine code lives in app.js as rgbToLab. It's about 12 lines of code.
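
For reference, here's a minimal sketch of that two-step conversion, using the standard sRGB matrix and the D65 white point. It follows the textbook formulas; the actual app.js version may differ in constants or style.

  // Sketch: sRGB (0–255) → CIE Lab, D65 reference white.
  function rgbToLab(r, g, b) {
    // Step 1a: gamma-decode sRGB to linear light.
    const lin = (c) => {
      c /= 255;
      return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    };
    const [R, G, B] = [lin(r), lin(g), lin(b)];
    // Step 1b: linear RGB → CIE XYZ (standard sRGB matrix).
    const x = 0.4124 * R + 0.3576 * G + 0.1805 * B;
    const y = 0.2126 * R + 0.7152 * G + 0.0722 * B;
    const z = 0.0193 * R + 0.1192 * G + 0.9505 * B;
    // Step 2: XYZ → Lab via the cube-root function against D65 white.
    const f = (t) => (t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16 / 116);
    const [fx, fy, fz] = [f(x / 0.95047), f(y / 1.0), f(z / 1.08883)];
    return { L: 116 * fy - 16, a: 500 * (fx - fy), b: 200 * (fy - fz) };
  }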

4. Sampling your skin (native resolution)

When you click a skin point, the engine does not sample the displayed (downscaled) preview canvas — it samples the original image at native resolution. Why? A 4000×3000 phone photo gets fitted into a 640×480 preview canvas, which is a ~6× downsample. Vein lines, freckles, and sub-pixel skin variation all get blurred into pixel averages at preview resolution, so you'd be sampling already-averaged data.

Instead, the engine:

  1. Loads your full-resolution photo into a hidden canvas (state.nativeCanvas).
  2. Translates your click from preview coords to native coords using the centered fit transform.
  3. Samples a small radius (scaled to the native resolution — bigger photos = bigger sample window so the same physical area is averaged) and computes the average RGB of that patch.

This recovers all the detail your camera captured, before any display resampling.
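
As a sketch, the coordinate mapping and patch average look roughly like this. The names (previewToNative, samplePatch) and the /400 radius rule are illustrative assumptions, not the actual app.js identifiers.

  // Map a preview click to native coordinates under a centered "fit" transform.
  function previewToNative(clickX, clickY, preview, native) {
    const scale = Math.min(preview.w / native.w, preview.h / native.h);
    const offsetX = (preview.w - native.w * scale) / 2; // letterbox margins
    const offsetY = (preview.h - native.h * scale) / 2;
    return {
      x: Math.round((clickX - offsetX) / scale),
      y: Math.round((clickY - offsetY) / scale),
    };
  }

  // Average a patch of native pixels; the radius scales with image width so
  // the same physical skin area is sampled regardless of megapixels.
  function samplePatch(ctx, x, y, nativeW) {
    const radius = Math.max(3, Math.round(nativeW / 400)); // assumed scaling rule
    const size = radius * 2 + 1;
    const d = ctx.getImageData(x - radius, y - radius, size, size).data;
    let r = 0, g = 0, b = 0;
    const n = d.length / 4;
    for (let i = 0; i < d.length; i += 4) {
      r += d[i]; g += d[i + 1]; b += d[i + 2];
    }
    return { r: r / n, g: g / n, b: b / n }; // edge clamping omitted for brevity
  }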

5. The "robust skin colors" filter

Even with native sampling, you might accidentally click on a freckle, a shadow, an eyebrow hair, a tear-duct highlight, or a strand of hair. The engine passes every sample through robustSkinColors, which rejects values that don't plausibly read as skin, and then a median-distance trim removes any remaining outliers. The surviving samples are averaged to produce the "robust skin" Lab signature.
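
Here's a sketch of the trim-and-average step, with an assumed cutoff of twice the median distance (the real threshold may differ):

  // Drop Lab samples far from the median point, then average the rest.
  function trimAndAverage(samples) {
    const median = (xs) => [...xs].sort((p, q) => p - q)[Math.floor(xs.length / 2)];
    const center = {
      L: median(samples.map((s) => s.L)),
      a: median(samples.map((s) => s.a)),
      b: median(samples.map((s) => s.b)),
    };
    const dists = samples.map((s) =>
      Math.hypot(s.L - center.L, s.a - center.a, s.b - center.b));
    const cutoff = 2 * median(dists) + 1e-6; // assumed: 2× the median distance
    const kept = samples.filter((_, i) => dists[i] <= cutoff);
    const mean = (key) => kept.reduce((sum, s) => sum + s[key], 0) / kept.length;
    return { L: mean("L"), a: mean("a"), b: mean("b") };
  }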

6. Classifying undertone — the Lab b-axis

The single most reliable signal for warm vs cool undertone is the Lab b-axis (yellow ↔ blue). Higher b = more yellow = warmer; lower b = less yellow = cooler.

Empirically, after testing hundreds of skin samples, the engine uses these thresholds:

Lab b      Lab a             Classification
≥ 21       —                 Warm
17 – 20    —                 Warm-neutral
14 – 16    —                 Cool-neutral
≤ 13       —                 Cool
17 – 22    < 13 (low red)    Olive

The olive case is special: olive skin has moderately high b (yellow component) but unusually low a (less red) — the classic green-yellow tilt. We test the joint condition b > 17 AND a < 13 AND b − a > 5 to catch it.
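
In code, the classification reads directly off the table. This is a hedged sketch from the stated thresholds, not the literal app.js function:

  function classifyUndertone({ a, b }) {
    if (b > 17 && a < 13 && b - a > 5) return "olive"; // joint condition first
    if (b >= 21) return "warm";
    if (b >= 17) return "warm-neutral";
    if (b >= 14) return "cool-neutral";
    return "cool";
  }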

Earlier versions of the engine used an RGB-based "warm bias" formula (r + g × 0.72 − b × 1.28). It classified almost every skin tone as warm. The Lab b-axis fixed that.

7. Contrast and clarity

Contrast comes directly from Lab L: higher L (lighter skin) pulls toward the Light seasons, lower L toward the Dark seasons, and the mid-range toward the True seasons of each family.

Clarity comes from HSL saturation, a second color space the engine computes alongside Lab: more saturated skin points toward the Bright seasons, more muted skin toward the Soft ones.
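
For reference, HSL saturation from RGB is the standard formula below; the engine's exact clarity thresholds aren't reproduced here.

  // Standard RGB (0–255) → HSL saturation (0–1).
  function hslSaturation(r, g, b) {
    const max = Math.max(r, g, b) / 255;
    const min = Math.min(r, g, b) / 255;
    if (max === min) return 0; // pure grey has no saturation
    const l = (max + min) / 2;
    return l > 0.5 ? (max - min) / (2 - max - min) : (max - min) / (max + min);
  }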

8. Ranking the 12 seasons

Each season has a target Lab signature representing the "average" person of that season:

Season          Target L   Target b
Light Spring    80         22
True Spring     72         26
Bright Spring   70         22
Light Summer    78         12
True Summer     70         11
Soft Summer     67         13
Soft Autumn     66         22
True Autumn     60         27
Dark Autumn     50         22
Dark Winter     50         13
True Winter     60         9
Bright Winter   60         13

For your sample, the engine computes a score for each season:

score = skinDistance × 0.55
      + |yourL − targetL| × 0.55
      + |yourB − targetB| × 1.4
      + (warmLike && family∈{Summer,Winter} ? 14 : 0)
      + (coolLike && family∈{Spring,Autumn} ? 14 : 0)
      + (light penalty / dark penalty / soft / bright clauses…)

where skinDistance is the smallest Lab ΔE from your sample to any color in that season's reference skin gradient. The b-axis carries a higher weight (1.4) than L (0.55) because warmth differentiates seasons more strongly than depth.

Lower score wins. The winning season is shown as the result; you can also pick any season manually from the dropdown to see its palette flexed against your skin.
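
Putting the pieces together, a scoring sketch might look like the following. The skinGradient and family fields and the elided penalty clauses are assumptions standing in for the real app.js logic.

  function scoreSeason(sample, season, undertone) {
    const warmLike = undertone === "warm" || undertone === "warm-neutral";
    const coolLike = undertone === "cool" || undertone === "cool-neutral";
    // Smallest ΔE from the sample to the season's reference skin gradient.
    const skinDistance = Math.min(...season.skinGradient.map((ref) =>
      Math.hypot(sample.L - ref.L, sample.a - ref.a, sample.b - ref.b)));
    let score =
      skinDistance * 0.55 +
      Math.abs(sample.L - season.targetL) * 0.55 +
      Math.abs(sample.b - season.targetB) * 1.4;
    if (warmLike && (season.family === "Summer" || season.family === "Winter")) score += 14;
    if (coolLike && (season.family === "Spring" || season.family === "Autumn")) score += 14;
    // ...plus the light/dark/soft/bright clauses elided above.
    return score; // lower is better
  }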

9. The wrist-vein chroma extraction

The vein analyzer is a separate tool. Veins on a wrist photo are typically 1–2 pixels wide on a phone-camera image, with only 5–15 RGB units of contrast against surrounding skin. A naive "click and average" sample returns mostly skin tone — useless.

Our solution: when you open the zoom modal and toggle "Show vein contrast", the engine does this:

  1. Reads the rendered zoom canvas (400×400 px of native image data).
  2. Computes the mean RGB across all visible pixels.
  3. Re-computes the mean using only pixels brighter than the first mean — this excludes the dim vein pixels and produces a clean "skin baseline".
  4. For every pixel, computes (r − meanR, g − meanG, b − meanB) × 5, centered around mid-grey (128, 128, 128).
  5. Replaces the canvas with this amplified deviation image.

Result: skin areas become near-grey, and any pixel that deviates from skin (vein lines, capillaries, freckles) is amplified by 5× into visible color. Real blue veins surface as bright cyan; warm veins as olive-yellow. The eye finally sees what was buried in JPEG noise.
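
A sketch of that amplification pass over a canvas context (illustrative; the real code may differ in details such as the brightness test):

  function amplifyVeinContrast(ctx, w = 400, h = 400) {
    const img = ctx.getImageData(0, 0, w, h);
    const d = img.data;
    const n = d.length / 4;
    // Steps 1–2: mean RGB across all pixels.
    let r = 0, g = 0, b = 0;
    for (let i = 0; i < d.length; i += 4) { r += d[i]; g += d[i + 1]; b += d[i + 2]; }
    const lum1 = (r + g + b) / (3 * n);
    // Step 3: re-mean over pixels brighter than the first mean — the skin baseline.
    let r2 = 0, g2 = 0, b2 = 0, n2 = 0;
    for (let i = 0; i < d.length; i += 4) {
      if ((d[i] + d[i + 1] + d[i + 2]) / 3 > lum1) {
        r2 += d[i]; g2 += d[i + 1]; b2 += d[i + 2]; n2++;
      }
    }
    const base = { r: r2 / n2, g: g2 / n2, b: b2 / n2 };
    // Steps 4–5: amplify each pixel's deviation from the baseline ×5, centered on mid-grey.
    const clamp = (v) => Math.min(255, Math.max(0, v));
    for (let i = 0; i < d.length; i += 4) {
      d[i] = clamp(128 + (d[i] - base.r) * 5);
      d[i + 1] = clamp(128 + (d[i + 1] - base.g) * 5);
      d[i + 2] = clamp(128 + (d[i + 2] - base.b) * 5);
    }
    ctx.putImageData(img, 0, 0);
  }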

The pixel under the crosshair is read either from the original photo (Photo mode) or from the amplified canvas (Contrast mode). The classifier uses whichever you confirmed.

10. Personalization within a season

When you pick (or are auto-classified into) a season, the engine has options for how to render the canonical palette colors:

  1. Pure canonical — Bright Winter Magenta is always #D1008F for everyone. Simple and consistent.
  2. Skin-tilted — shift each palette color's Lab values by a fraction of (your skin Lab − season-typical skin Lab). This produces visibly different "Bright Winter Magenta" hexes for two different individuals.

We chose option 1 (canonical). The Bright Winter palette is a fixed character — Magenta, Cobalt, Icy Lemon, Hot Pink, Ink Black, Bright Emerald — and tilting those colors per-individual would erode the season's identity. What does personalize per individual is the skin-tone shade range (your own skin in 6 steps light-to-dark, computed from your sampled Lab values) and the makeup palette (which physically sits on skin and benefits from per-individual tuning).
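
For concreteness, here's roughly what the rejected option 2 would look like; the tilt fraction k is an assumed parameter, and the engine does not do this for clothing palettes.

  // Option 2 (not used): tilt a canonical palette color toward the individual.
  function tiltColor(colorLab, yourSkinLab, seasonSkinLab, k = 0.25) {
    return {
      L: colorLab.L + k * (yourSkinLab.L - seasonSkinLab.L),
      a: colorLab.a + k * (yourSkinLab.a - seasonSkinLab.a),
      b: colorLab.b + k * (yourSkinLab.b - seasonSkinLab.b),
    };
  }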

11. Limitations

Photo-based color analysis has hard limits. We're honest about them:

  1. Lighting: the math assumes daylight-like illumination. Warm indoor bulbs or mixed light put a color cast on every sample.
  2. Camera processing: phones white-balance, sharpen, and JPEG-compress aggressively, sometimes by more than the small signals we measure.
  3. Displays: the engine measures the file itself, but the palettes you see are rendered on an uncalibrated screen.
  4. The skin itself: makeup, tan, or temporary redness shift your samples away from your baseline coloring.

For these reasons, we recommend treating the result as a thoughtful starting point. An in-person consultation with a trained color analyst, in proper daylight with physical drape fabrics, is still the gold standard for definitive analysis.

12. The historical lineage

None of this is new. The intellectual roots reach back through a century of color science: Albert Munsell's color order system and the CIE's colorimetry work, which culminated in the 1976 Lab space; Johannes Itten's observation at the Bauhaus that students instinctively painted in palettes harmonizing with their own coloring, which he grouped by season; Carole Jackson's Color Me Beautiful (1980), which popularized four-season analysis; and the Sci/Art system, which split each season into three to give the twelve used here.

This page documents the engine as of May 2026. Algorithms are revised regularly; check back for updates. PaletteReveal provides stylistic guidance only — not medical, biometric, or professional analysis.