Sorting out how to display large photos on the new iPad’s Retina display over the last few days has been a lot of fun, and figuring out that progressive JPEG would do the trick is nifty. But I want to stress that this was exploration into how to do it, not necessarily into how to do it well. One of the reactions I’ve seen in more than a few places could be paraphrased as: “Great! We’ll just start using 2x progressive JPEGs everywhere and all will be fine.”
As a Scot might say: that’d be a weeeee bit premature.
First off, progressive JPEGs… Really? We used to use them on the web—along with interlaced GIFs, remember those?—back when the majority of people were on dialup connections. The immediate need for them fell away as bandwidth went up and, frankly, I’m not sure the effort of figuring out whether it’s a good idea to go back to them, or of changing workflows to use tools that can produce them, is even warranted. As Chuq von Rospach points out, ImageMagick creates progressive JPEGs by default, but Aperture, Lightroom, and many other tools can’t even write them.
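If you’re curious whether a given tool hands you baseline or progressive output, the JPEG file itself will tell you: the first Start-of-Frame marker is SOF0 (bytes FF C0) for a baseline image and SOF2 (FF C2) for a progressive one. Here’s a minimal Python sketch of that check, using only the standard library; the file name in the usage comment is just a placeholder:

```python
import struct

def jpeg_mode(path):
    """Classify a JPEG as 'baseline' or 'progressive' by finding its
    first Start-of-Frame marker (SOF0 = baseline, SOF2 = progressive)."""
    with open(path, "rb") as f:
        data = f.read()
    i = 2  # skip the SOI marker (FF D8)
    while i < len(data) - 1:
        if data[i] != 0xFF or data[i + 1] in (0x00, 0xFF):
            i += 1  # stuffed byte or fill, not a marker
            continue
        marker = data[i + 1]
        if marker == 0xC0:
            return "baseline"
        if marker == 0xC2:
            return "progressive"
        if marker in (0xD8, 0x01) or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        if marker in (0xD9, 0xDA):
            break  # end of image or start of scan: no SOF seen
        # every other segment starts with a 2-byte big-endian length
        (seglen,) = struct.unpack(">H", data[i + 2:i + 4])
        i += 2 + seglen
    return "other"

# e.g. print(jpeg_mode("photo.jpg"))
```

This is how you can quickly audit what your export tools actually emit without cracking open a hex editor.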
My gut tells me that the issue we’re seeing with the display of JPEGs greater than 2 megapixels in size on the iPad is an artificial limitation based on a set of expectations coded in for pre-Retina displays. That gut feel is encapsulated in the filename I saved my first post on this topic as: “webkitretinabug”. It’s not a bug, per se. The limit is obviously intentional and it’s working as designed. But it prevents the display of full-screen-size baseline JPEGs in WebKit. That interferes with full use of the Retina display, and Apple’s own published WebKit iOS resource limits state that the maximum size for canvas elements is 5 megapixels on devices with 256MB or more of RAM. So I can’t imagine what we’re seeing is anything except an oops that fell through the cracks.
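As a concrete point of reference, the same Start-of-Frame segment that identifies a JPEG as baseline or progressive also records its pixel dimensions, so you can check a file against the apparent 2-megapixel ceiling (or Apple’s documented 5-megapixel canvas limit) without decoding any image data. A sketch, with a hypothetical file name in the usage comment:

```python
import struct

# All SOF marker codes except DHT (C4), JPG (C8), and DAC (CC).
SOF_MARKERS = {0xC0, 0xC1, 0xC2, 0xC3, 0xC5, 0xC6, 0xC7,
               0xC9, 0xCA, 0xCB, 0xCD, 0xCE, 0xCF}

def jpeg_dimensions(path):
    """Return (width, height) from a JPEG's first Start-of-Frame
    segment without decoding any image data."""
    with open(path, "rb") as f:
        data = f.read()
    i = 2  # skip the SOI marker (FF D8)
    while i < len(data) - 1:
        if data[i] != 0xFF or data[i + 1] in (0x00, 0xFF):
            i += 1  # stuffed byte or fill, not a marker
            continue
        marker = data[i + 1]
        if marker in SOF_MARKERS:
            # segment layout: length(2), precision(1), height(2), width(2)
            height, width = struct.unpack(">HH", data[i + 5:i + 9])
            return width, height
        if marker in (0xD8, 0x01) or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        if marker in (0xD9, 0xDA):
            break  # end of image or start of scan
        (seglen,) = struct.unpack(">H", data[i + 2:i + 4])
        i += 2 + seglen
    raise ValueError("no Start-of-Frame segment found")

# Hypothetical usage:
# w, h = jpeg_dimensions("hero@2x.jpg")
# print(w * h / 1_000_000)  # megapixels; > 2 trips the apparent limit
```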
Speculating further, I’d be willing to bet that the only reason progressive JPEGs display at full resolution while baseline JPEGs get subsampled down is that the code path is slightly different and bypasses the hard-coded limit the baseline JPEGs run into. I have no basis for this speculation, mind you. It’s just a wild-ass hunch based on years of debugging weird issues in software.
Second, if progressive JPEG were the true golden path, I think we’d be seeing a lot more of them on Apple’s website. Digging around looking for 2x images—a slightly tedious task, really—I don’t find many progressive JPEGs. The ones I do find, such as the 2x resolution iPad hero image, are the larger ones that are bigger than 2 megapixels. Almost all of the other graphics used on the iPad page, for example, such as this 2x resolution side view of the iPad, are regular baseline JPEGs. Now, it’s not a strict division. I did find a 2x resolution image of the iBooks app that is less than 2 megapixels in size, but that one seems to be an exception to the rule.
My read of the tea leaves here is that if progressive JPEG were the way forward, we’d see a lot more of them on the iPad page. As it stands, it seems much more likely that the design team ran into the baseline JPEG issue with the larger images and worked around it, with the help of a little inside knowledge that progressive JPEG would work.
Third, the way progressive JPEGs are decoded seems to run counter to the iOS core team’s philosophy of maximizing processing bang per watt of battery. In an email conversation, David Magda pointed me at the progressive JPEG portion of the JPEG image compression FAQ. Here’s what it says about the progressive JPEG file format, with emphasis on the most interesting line:
The advantage of progressive JPEG is that if an image is being viewed on-the-fly as it is transmitted, one can see an approximation to the whole image very quickly, with gradual improvement of quality as one waits longer; this is much nicer than a slow top-to-bottom display of the image. The disadvantage is that each scan takes about the same amount of computation to display as a whole baseline JPEG file would. So progressive JPEG only makes sense if one has a decoder that's fast compared to the communication link.
On any modern desktop, the extra computation is insignificant. But on a device whose development team is famously stingy with power consumption, steering things toward a web filled with images that take multiple decode passes to display seems out of character.
Speculation and conjecture? Certainly. With Apple’s famously tight-lipped communication practices and very little information from the mothership, however, it’s about all we have to go on right now. The bottom line is that I think it’s too early to make a blanket decision about best practice here. In fact, establishing a practice now based on what is probably a bug is likely counterproductive.
What we need, in my opinion, is a bit of movement in the HTML and CSS specifications so that we can simply put a set of images into a list and let the client pick which one to download based on its own heuristics. The Responsive Images Community Group is looking at what to do in this direction and I hope they move quickly and surely to something that works well.
Until we get that, anything we do is really just a workaround. The trick will be to find workarounds that are both reasonable in the near term and easy enough to walk away from when we get something better.