PNG 5th Edition Roadmap

TPAC 2025

These slides can be found at programmax.net/talks/png-5th-edition-roadmap

Thank you

  • W3C
  • Hakim El Hattab, for reveal.js (slide software)
  • Sandflow Consulting LLC, for the Pareto front image
  • Brooke Vibber, for parallel PNG diagrams

Copyright notices are below.

General themes:

  • 3rd Edition—Bring it up to date
    • HDR, animation, …
  • 4th Edition—Compatibility
    • HDR image on SDR display
  • 5th Edition—Improve compression

A timeline of PNG spec development

Work on 4th and 5th Editions is being done in parallel.

Developers & users want better compression

Prior to 3rd Edition's launch, several people filed issues proposing ways to improve compression.

After 3rd Edition launched, we received lots of community feedback asking for improved compression.

Not only file size

(De)compression speed is also important, and the trade-off against file size creates a Pareto front.

A graph of various image decoders showing file size compared to decode speed

Compressor improvement categories:

  • No spec changes needed
  • Small, simple additions
  • Update existing carve-outs
  • Radical changes

No spec changes

• This is effectively free for W3C, which primarily focuses on specs, not implementations.
  • But people see and feel implementations.
    • For a spec to thrive, we need to care about the whole pipeline.
    • Some work is justified.

No spec change—parallelization

Background

  • PNG uses the DEFLATE compression algorithm (often zlib implementation).
  • DEFLATE builds a history as it decodes
    • Inherently serial
    • Cannot jump to the middle
  • DEFLATE supports restart markers, which abandon history.
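
A minimal sketch of producing such a restart point with zlib (segment length, buffer handling, and error handling are simplified assumptions): passing Z_FULL_FLUSH to deflate() flushes all pending output and resets the match history, so a decoder can later begin inflating at that boundary with no prior context.

    /* Sketch: compress a buffer in segments, emitting a DEFLATE
     * restart marker (full flush) between segments. Assumes out_cap
     * is large enough to hold the whole compressed stream. */
    #include <string.h>
    #include <zlib.h>

    #define SEGMENT (64 * 1024)  /* per-segment input size (arbitrary) */

    int deflate_with_restarts(const unsigned char *in, size_t in_len,
                              unsigned char *out, size_t out_cap,
                              size_t *out_len)
    {
        z_stream strm;
        memset(&strm, 0, sizeof strm);
        if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
            return -1;

        strm.next_out  = out;
        strm.avail_out = (uInt)out_cap;

        for (size_t off = 0; off < in_len; off += SEGMENT) {
            size_t n = in_len - off < SEGMENT ? in_len - off : SEGMENT;
            strm.next_in  = (unsigned char *)in + off;
            strm.avail_in = (uInt)n;

            /* Z_FULL_FLUSH abandons the history at the segment
             * boundary; Z_FINISH terminates the final segment. */
            int flush = (off + n < in_len) ? Z_FULL_FLUSH : Z_FINISH;
            if (deflate(&strm, flush) == Z_STREAM_ERROR) {
                deflateEnd(&strm);
                return -1;
            }
        }

        *out_len = out_cap - strm.avail_out;
        deflateEnd(&strm);
        return 0;
    }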

pigz

pigz (pronounced "pig-zee") was written by the original zlib author, Mark Adler, to add parallelization.

pigz header

mtpng uses the pigz approach for PNGs

Parallel PNG writing diagram

Notice that only 2 threads are used for decoding.

Parallel PNG reading diagram

Small spec addition—N-thread decoding

New PNG chunk

A new PNG chunk tracks DEFLATE restart marker positions.

Threads start decoding at the restart markers.

  • Cheap
  • Easy
  • Backwards compatible
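
One plausible shape for such a chunk is sketched below; the chunk name and every field are hypothetical illustrations, since nothing has been specified yet.

    #include <stdint.h>

    /* HYPOTHETICAL: one entry of a restart index carried in a new
     * ancillary chunk (here imagined as "pRSt"). The name and layout
     * are illustrations only; integers would be big-endian, per PNG
     * convention. */
    struct restart_segment {
        uint64_t idat_offset;  /* byte offset into the IDAT data   */
        uint32_t first_row;    /* first image row in this segment  */
    };

    /* Each worker thread inflates from segments[i].idat_offset with
     * a fresh DEFLATE history and writes rows starting at
     * segments[i].first_row. A decoder that does not recognize the
     * chunk ignores it and inflates the IDAT stream serially, which
     * is what keeps the scheme backwards compatible. */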

Open questions

  • How small / large should a piece be?
    • Too small = compression ratio suffers (history is abandoned more often)
  • How fast are decodes?
    • Fast decoding = fewer threads in flight at once
    • Slow network = threads sit idle waiting for data

Use cases

  • Browsers process packets upon arrival.
  • Non-browser programs ~always load the entire file before processing.

Research to answer the questions

A GitHub page for FileReadSpeedTest

Research to answer the questions

A screenshot of ImageInternals

Existing carve-outs

Add other compression methods

From the PNG spec: "Other values of compression method are reserved for future standardization."
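
For orientation, the compression method is a single byte in the IHDR chunk, and a decoder dispatches on it. In the sketch below, the two helper functions and the non-zero method value are invented purely for illustration.

    #include <stddef.h>
    #include <stdint.h>

    enum {
        PNG_COMPRESSION_DEFLATE = 0, /* the only value defined today   */
        PNG_COMPRESSION_ZSTD    = 1, /* HYPOTHETICAL future assignment */
    };

    /* Hypothetical helpers, one per compression method. */
    int inflate_idat(const uint8_t *in, size_t n, uint8_t *out, size_t cap);
    int zstd_idat(const uint8_t *in, size_t n, uint8_t *out, size_t cap);

    int decompress_idat(uint8_t method, const uint8_t *in, size_t in_len,
                        uint8_t *out, size_t out_cap)
    {
        switch (method) {
        case PNG_COMPRESSION_DEFLATE:
            return inflate_idat(in, in_len, out, out_cap);
        case PNG_COMPRESSION_ZSTD:
            return zstd_idat(in, in_len, out, out_cap);
        default:
            return -1; /* "reserved for future standardization" */
        }
    }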

Research needed

  • How do those compressors compare?
  • Is the implementation burden worth it?

Radical changes

Uphill battle

Everything up to this point likely came across as reasonable.

After this point, not so much.

But remember, hearts & minds of end users

Compression is really 3 things

  • Compression algorithm
  • Tuning the data to the compressor (see the sketch after this list)
  • Tuning the data to the end user
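
PNG already does the second item: its scanline filters (None, Sub, Up, Average, Paeth) reshape pixel data so DEFLATE finds more redundancy. As one example, the Paeth predictor from the spec picks whichever neighbor best predicts each byte.

    #include <stdlib.h>

    /* Paeth predictor, as defined in the PNG spec. a = byte to the
     * left, b = byte above, c = byte above-left. The filtered byte is
     * the raw byte minus this prediction, which usually makes the
     * stream far more compressible by DEFLATE. */
    unsigned char paeth_predictor(int a, int b, int c)
    {
        int p  = a + b - c;   /* initial estimate */
        int pa = abs(p - a);
        int pb = abs(p - b);
        int pc = abs(p - c);
        if (pa <= pb && pa <= pc) return (unsigned char)a;
        if (pb <= pc)             return (unsigned char)b;
        return (unsigned char)c;
    }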

PNG is well known for being lossless

  • PNG gets only 2 of those 3 benefits
    • It cannot tune the data to the end user
  • Its file size will never compete with lossy formats
  • The web is often the final presentation (the end user)

Fundamental goal?

Is PNG an archival format? Lossless makes sense.

Is PNG an end user web format? Lossy makes sense.

It can be both

WebP and JPEG XL support both lossy and lossless.

The way they do lossless is nearly identical to how PNG works.

Lossy is the default

Screenshots are perhaps the only truly lossless data source.

Imperfect camera sensors, color quantization, pixel aliasing, etc. all contribute to loss.

Too different? New format?

If PNG is known as lossless, perhaps this fits into PNG2.

Why compete?

Perhaps we could simply improve WebP / JPEG XL.

But PNG will struggle for hearts & minds.

Q&A