Improved Color for the World Wide Web:
A Case Study in Color Management for
Distributed Digital Media
Distributed digital media need color management just as traditional printing does. However, no tools or practices exist for managing color on the World Wide Web. Consider a GIF image in a page of HTML on the World Wide Web. Pixel colors in the image are implicitly tied to characteristics -- such as phosphor chromaticity, gamma, and white point -- of the device on which it was created. Unless the display device miraculously happens to have exactly the same characteristics, image color will not be preserved.
Color management can solve this problem, even though the color characteristics of the display system are not, and cannot be, known at the time the Web page is created. A technique is presented to embed in the GIF image an International Color Consortium (ICC) device color profile describing the source device. Armed with this and an ICC profile for the display device, the Web browser can then create and display a GIF image in the device color space of the display device. The benefits of applying this technique, and some of its pitfalls, are discussed.
Why the Web Needs Color Management
The World Wide Web has been tremendously successful. Both the number of users and the amount of material available on it have grown at an astonishing rate. There hardly seems to be a reason to think about improving upon it. But as the Web grows, people are increasingly looking at the Web as a tool for commerce. And there we see one shortcoming of the Web today. Color reproduction on the Web is not nearly at the quality level needed for catalogue sales. There are products, such as lipstick and clothing, that are purchased primarily or exclusively because of their color. If that cannot be reproduced with accuracy rivalling that achieved on paper today, the Web cannot be used to replace or augment paper-based sales channels.
This paper describes a project to add color management support to the production and distribution of documents on the World Wide Web. The primary goal of the project was to improve the reproduction of color. Color management had to be added in a way that was compatible with Web browsers and pages that did not support color management. Further, existing established Web tools (specifically, the NetScape(TM) browser) were not to be modified. The solution had to integrate cleanly into the Web as it now exists. Finally, within the earlier constraints, designs that would work well for personal computers were to be favored. The first constraint was imposed because of the large amount of legacy data and software. Popular software is like a weed: it may sprout rapidly; but it will not die off so quickly. The second constraint was imposed mostly to keep the scope of the project manageable and partly to enhance the chances for acceptance. People are used to the look and behavior of their tools. I wanted to improve them, but not to change them. The final weak constraint was imposed to lead toward a design that acknowledged that most of the current users of the Web are on machines with relatively slow network connections and moderate amounts of main memory.
In order to explain the problem to be solved, the paper begins with a description of the contents of HTML pages and how Web browsers present these pages. Special attention is paid to GIF(C) image files. The solution space is constrained by the need for distributed color management, which is described next. Having set the stage, the bulk of the paper is devoted to explaining the solution in detail. After a discussion of the results of the project, areas for future research are outlined.
Presenting a Web Page
Pages on the Web are written in HTML, the HyperText Markup Language. (The specification for HTML is available, in HTML, on the Web at http://www.w3.org/hypertext/WWW/MarkUp/MarkUp.html) HTML consists of text, document structuring commands, and hypertext references. It is the job of the HTML browser to decide how to process hypertext references and to determine how to present the document based on the document structuring commands. (The term "present" is used because it sounds odd to speak of "displaying" audio data, which is often part of HTML pages.) Structuring commands denote parts of a document such as section headings, lists, extended quotations, and the body of the document. The browser determines how each of those sections is to be presented by selecting the appropriate font, page layout, and so on.
In the source file, hypertext references look like the token "http://" followed by something that resembles a Unix(TM) file path. These are called "uniform resource locators" (URLs). Again, the Web browser determines how each reference is handled. References to other HTML documents are usually handled by displaying some highlighted text. If that text is selected with a mouse, the current page is replaced with the referenced page. References to images are handled either by bringing up an image viewer or by displaying the image integrated into the document. References to audio files invoke a sound player; references to movie files, a movie player. The mechanism that allows browsers to determine how to handle references is very flexible, at least on Unix systems. It is that flexibility that allowed my project to succeed.
The Web uses the MIME format to transport referenced data across the Internet. This format tags the data with a "Content" field that indicates the type of the data. Unix-based Web browsers then use a "mailcap" file to determine how to present data based on its type. Users may provide their own mailcap file. Any data types not supported in the user's mailcap file will be searched for in a system default mailcap file. (Actually, there is a hierarchy of default files. The system searches through a list of mailcap files, stopping when it encounters a rule covering the data type.) Any types still not recognized cannot be presented by the Web browser. Mailcap files are plain text files. Each line of the file specifies a media content type and the Unix program to be used to present that type. For example, my mailcap file contains the following lines:
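The lines themselves are not reproduced in the surviving text; based on the program names described just below, they would have looked approximately like this (standard mailcap syntax, with %s standing for the name of the data file; the exact MIME type names are an assumption):

```
audio/aiff; playaiff %s
video/mpeg; movieplayer %s
```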
So if an audio file is found, the program playaiff is invoked with the audio file as an argument. If an MPEG movie is found, movieplayer is invoked with the movie file as an argument. Figure 1 illustrates the decision tree used to determine how to present HTML.
Figure 1. Determining how to present HTML.
This scheme made it easier to port Web browsers to different Unix systems. The implementor of the browser does not have to write a new set of multimedia tools, or include them into the browser program. Different tools of equivalent functionality can be substituted on different vendors' systems or at the whim of the skilled user. (Unix systems are very big on catering to the whims of skilled users.) For example, Silicon Graphics provides multiple ways to view GIF images; among them are the applications xv and imgview. The default configuration of our system level mailcap file invokes imgview when presented with a GIF image file. But by putting the line:
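The line in question would, in standard mailcap syntax, read approximately as follows (the MIME type image/gif and the %s filename placeholder are reconstructed from the surrounding description):

```
image/gif; xv %s
```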
in my mailcap file, I can choose to invoke the program xv instead.
There are two ways that images can be referenced in HTML. One is with the tag "IMG", which explicitly marks the referent as an image. The other way is with the tag "HREF", which can be used for any hypertext reference. This leaves it up to the MIME mechanism to determine the type of the reference and to the mailcap mechanism to determine how to present the referent. The default browser on our system, NetScape's Mozilla, recognizes "IMG" and displays the image in the same window as the rest of the document. This makes for a better looking document. The browser is not capable of recognizing when a generic HREF happens to reference an image file; such references are left to be handled by the mailcap mechanism. (There is an optional way to turn off the automatic integration of images, but it does not work in the current version.)
GIF Image Files
While any image type could be supported in an HTML document, the vast majority of images available on the Web are GIF (Graphics Interchange Format (c) and (sm)) images. GIF images are either bitonal or colored. A GIF image is a two dimensional array of pixels. The pixels do not directly represent colors, but are indices into a color lookup table. The size of a color lookup table must be a power of two, from 2 to 256 entries. In other words, color tables have between two and 256 colors. All the colors in the table are specified with three components: red, green, and blue. The components are each 8 bits deep. Thus, an image can have up to 256 colors from a palette of approximately 16 million.
The GIF format became the dominant one on the Web for a combination of historical and technical reasons. CompuServe has made an extensive effort to promote the format, and offers it for use without royalty fees. Technically, the file format is easy to produce. The color lookup table technology offers a 3:1 factor of compression. On top of that, the GIF file format uses the LZW compression algorithm to offer additional compression. The compression algorithm is well-documented and easy to implement.
Unfortunately, the format is also patented and requires licensing, which apparently came as something of a surprise to CompuServe. An effort to devise an open and license-free image format is under way and is being supported by CompuServe. This effort is called Portable Network Graphics (PNG) and is discussed below in the "Areas for Further Research" section.
For the purposes of this paper, the most important thing to remember about the GIF format is that all the pixels in images are described as red, green, and blue values. This is called an RGB color space. However, the meanings of "red," "green," and "blue" are not well defined. Most software treats it as the RGB space of the display monitor. This may or may not resemble the RGB space of the monitor on which the image was created, depending on the color characteristics of the phosphors on the two different monitors. Often images are acquired on desktop scanners. While these are RGB devices, the spectral response of the red, green, and blue primaries, the white point, and the tone response curves are often very different from that of a computer monitor. Whatever the source of the image, any resemblance between the display device's color space and the source's is purely fortuitous.
We have seen that Web pages contain a mixture of text and references to data in other formats such as images, movies, and sound. We have looked a little more closely at GIF image files. In doing so, we have seen that the color for those images is based on the device (monitor or scanner) that the image was created on, but that the exact meaning of the colors is not available when the image is to be displayed. Next we are going to see how this problem relates to other color management problems.
Distributed Color Management
Color management for the Web is different from more traditional pre-press color management. Most pre-press systems are tightly coupled. That is, the scanner, the monitor, and all output devices (such as printers or proofers) are all running on the same system. This means that the color characteristics of the destination device are fixed at the time the image is acquired. There is not much difference between adjusting the color as it comes out of the scanner or adjusting it before it is output. See Figure 2.
Figure 2. A tightly coupled system. Colorimetric data for any device is available at any point.
The situation is similar when a large scale sheet- or web-fed printing press is the ultimate target. The press operator is required to maintain the press in close adherence to an industry standard such as SWOP or Euroscale. Once the standard and paper type are known, the system, although geographically distributed, is again tightly-coupled. The color management system can correct to the reference standard as early as desired. In fact, it is not uncommon for the scanner to output image files in the printer's CMYK color space.
In contrast with this, the Web is very loosely-coupled. The creator of a document does not know how many different systems will ultimately present that document, what type of devices will be used to present the document, nor even on what medium the document will be presented. Most Web browsers can print color images, so an image may be viewed on a monitor, printed on paper, or both. And the number of different kinds of printer, paper, and ink in use is impossible to calculate.
Colors cannot be adjusted for final display as they are scanned. They cannot be adjusted at any point during document creation. The only time it can be done is at document presentation, because that is the only time that the final presentation medium and device are determined. But just as the creator of the document does not know the color characteristics of the device on which the image will be displayed, the displayer of the document does not know the color characteristics of the device on which the image was created. See Figure 3.
Figure 3. A loosely coupled system. Colorimetric data cannot be moved between the source and destination machines.
The solution to this problem is to break it into two parts. At document creation time, some way must be provided to map the source device's color space into a well-known color space. At document presentation time, some way must be provided to map from the well-known color space into the display device's color space. The creator of the document does not need to know how it will be displayed or the color characteristics of devices on the other side of the Web. The person who presents the document knows the display device and now has enough information about the color space of the source image.
The best way to provide the mapping of the source device's color space is to embed the mapping information into the source image. It is possible to send two files over the Web, but it is quite likely that they will get separated at some time. If the information is part of the image, there is only one file to manage and much less chance of losing the color information.
This solution also works for documents that include more than one image. One document may reference many images created on different devices, each with its own device-dependent color space. The images should, of course, be color adjusted separately, so an overall document color profile would not be adequate. But each image can have its own embedded color information, and the color adjustment can be done on each image. See Figure 4.
Figure 4. Embedded profiles in images. Colorimetric data from the source is available on the destination machine.
This is not the only possible approach. All the images could just be translated into a reference color space and stored in the document that way. This is how the tightly-coupled systems work: they move everything into SWOP CMYK (or CIELAB) and every image in the document is in the same color space. SWOP and Euroscale would not be appropriate spaces for GIF files, however, because GIF only supports RGB. The same incompatibility precludes storing the images in one of the CIE color spaces. None of the Web browsers support CMYK- or CIE-based image formats directly. Since compatibility with existing software was a project requirement, extending the GIF format by embedding color descriptions seemed like the best solution.
Another approach would be to provide all the image data in both the RGB color space, for compatibility with existing software, and in a CIE color space, to support color management. But this would double the size of the image, which is unacceptable since it is unnecessary. Instead, we store the data once in the traditional device-dependent RGB space, and then provide additional information to allow us to map from that space into a CIE space later on.
Color management for the Web presents a new paradigm for color management. The system used to create documents may be separated from the system used to present the documents. We solve the color management problem by embedding a description of the source device's color space in the image itself. Then the display system can map device colors into its own display device's color space. In this way, what the creator saw will be closely matched by what the ultimate reader will get.
Color Management for GIF
Since distributed color management requires a two part solution, two tools were needed to solve it: one to run on the source machine at document creation time, and one to run on the destination machine at document presentation time. At creation time, we needed a tool to tag a GIF image with a device color description. At display time, we needed a tool to process a tagged image and adjust the color for the display device. Each tool posed its own problems. For the tagging tool, the problem was to find a way to extend the GIF format in a manner compatible with existing tools. For the color adjustment tool, the problems were to find a way to intervene in the browser's image display process, to find a characterization of the display device's color space, and to perform the actual color adjustment. Obviously, the tagging and adjusting tools had to use the same method of describing the device color space.
First, I will present that common method for describing device color spaces, the ICC device profile. Then I will show how the device profiles can be embedded into GIF image files. Once that is possible, actually writing the tagging and adjustment tools is quite straightforward. The only remaining challenge was finding an opportunity to apply the adjustment tool within the browsing process.
Adjusting Colors: ICC Device Profiles
I chose to use the International Color Consortium (ICC) device profiles, both for the embedded characterization of the source device's color space and for the characterization of the display device's color space. The ICC profiles provide a mapping between the device color space and either CIEXYZ or CIELAB. Although the format is quite new, I had access to a wide variety of profiles for scanners, monitors, and printers. The profiles are quite portable. The same profile can be used under the Macintosh(TM) operating system, Windows 95(TM), Solaris(TM), or SGI's Irix(TM) operating system. While porting to other platforms was outside the scope of this project, selecting a mechanism that would have been easy to port seemed a good plan.
The color management system under development at SGI supports the ICC profiles. It also provides a mechanism to find the profile for the workstation's display. It was a matter of a day's work to write a simple program, cmdecodegif, that adjusted the colors of a GIF image, once I had figured out how to embed an ICC profile. Embedding was done with a program called taggif.
Extending the GIF Format
To understand how the GIF file format was extended, it is necessary to understand in some detail the structure of a GIF file. The following description is much simplified, but suffices for the purposes of understanding this project. GIF files are designed to be able to serve a number of different purposes. The most common use is for a file to contain a single image, and a Color Table. A Color Table is a one dimensional array of three component entries. Each entry has a red, a green, and a blue component. The table provides the mapping between the index stored in the image and the device-dependent color space of the display device. A file may also contain multiple images. Each image may have its own Local Color Table, or it may default to use the Global Color Table. Within a single file, some images may use the Global Color Table and some their own Local Color Table. Further, it is possible to define a file that contains no images, but only sets the contents of the Global Color Table for use by subsequent files. Unfortunately, all that flexibility does come at the expense of increased code complexity.
A GIF file is composed of a series of typed blocks. Some blocks contain image data, some contain color table data, and some control the interpretation of subsequent blocks or set state needed for processing other blocks. The sequencing of blocks is defined in [1] by a grammar. The GIF grammar uses the following set of symbols:
<>   a defined symbol in the grammar
::=  defines a symbol
*    zero or more occurrences
|    an alternate element
[]   an optional element
The grammar is then presented as follows:
<GIF Data Stream> ::= Header <Logical Screen> <Data>* Trailer
<Logical Screen> ::= Logical Screen Descriptor [Global Color Table]
<Data> ::= <Graphic Block> | <Special-Purpose Block>
<Graphic Block> ::= [Graphic Control Extension] <Graphic-Rendering Block>
<Graphic-Rendering Block> ::= <Table-Based Image> | Plain Text Extension
<Table-Based Image> ::= Image Descriptor [Local Color Table] Image Data
<Special-Purpose Block> ::= Application Extension | Comment Extension
This grammar is actually fairly easy to read. For example, the first line may be interpreted as follows. A "GIF Data Stream" (or GIF file), is composed of a Header, followed by a Logical Screen, followed by zero or more instances of Data, followed by a Trailer. Each of the terminal symbols (Header, Trailer, Logical Screen Descriptor, Plain Text Extension, etc) denotes a different kind of block. The contents of each of the block types is defined elsewhere in the GIF specification.
Color Table blocks (both global and local) were the focal point for this project. The tagging tool needed a way to associate a device profile with each Color Table block. The adjustment tool needed to find the profile and the Color Table and be able to create a new adjusted Color Table. In fact, both tools are written as Unix filter programs which read in a GIF file and some command line arguments as inputs and write out a suitably modified GIF file as output.
The only user-definable block offered is the Application Extension. Different user-defined blocks are distinguished by an Application Identifier which begins the Application Extension block. So that was where the embedded profile information had to be placed. Since this was only an experiment, I created an Application Identifier, the string "ICCRGBG1012", but did not register it with CompuServe. It was an unfortunate complication that the block had to be placed after a Global Color Table, but before a Local Color Table. Examining the grammar, we find that the production placing an Application Extension block near a Global Color Table works as follows. Each line below is produced from the preceding one by expanding one non-terminal symbol according to the rules of the grammar. (I've compressed spaces out of names to increase readability):
Header <LogicalScreen> <Data>*
Header LogicalScreenDescriptor GlobalColorTable <Data>*
Header LogicalScreenDescriptor GlobalColorTable <SpecialPurposeBlock> <Data>*
Header LogicalScreenDescriptor GlobalColorTable ApplicationExtension <Data>*
The Application Extension block containing our embedded profile immediately follows the Global Color Table. But examine the following production:
<Table-BasedImage> ::= ImageDescriptor [LocalColorTable] ImageData
If we place an Application Extension block containing a profile after the Table-Based Image, it not only follows the Local Color Table, which is convenient, but it also follows all the image data. In order to process any Color Table with a profile, both the color table and the profile must be buffered. If the profile followed the image data as well, that too would have to be buffered. But an image file can easily be a million bytes long. That is a lot of buffer space to require. It might be feasible on a Unix workstation, but seems ludicrous for a personal computer. The grammatical production that places the Application Extension as close as possible to a Local Color Table is:
<Data>*
<SpecialPurposeBlock> <Table-BasedImage> <Data>*
ApplicationExtension ImageDescriptor LocalColorTable ImageData <Data>*
Even though this leads to an asymmetry in processing global and local color tables, it keeps the relevant data near at hand. The Image Descriptor is small and much easier to buffer than the Image Data.
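As a concrete illustration of the container being used, the fixed header of such an Application Extension block can be generated with one shell command. Per the GIF89a specification, the block begins with the Extension Introducer (0x21), the Application Extension label (0xFF), and a block-size byte of 11, followed by the 11-byte identifier; here I use this project's identifier string "ICCRGBG1012" (the octal escapes are simply those byte values written portably for printf; the output file name is illustrative):

```shell
# Emit the fixed 14-byte header of a GIF Application Extension block:
# 0x21 (Extension Introducer), 0xFF (Application Extension label),
# 0x0B (block size = 11), then the 11-byte identifier "ICCRGBG1012".
# The profile data itself would follow in data sub-blocks.
printf '\041\377\013ICCRGBG1012' > appext-header.bin
wc -c < appext-header.bin   # 3 + 11 = 14 bytes
```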
Intervening in Image Display
At this point, I knew how to embed a profile into a GIF file, which solved the encoding problem. I also knew how to perform color management on a GIF file with an embedded profile. The next problem was how to get that color adjustment to be invoked by the Web browser. Fortunately, most of the mechanism was already in place. The solution described below works on SGI systems, and I believe it would work on any Unix system. Implementing it on other types of operating systems should be quite feasible, but the details may vary.
The first step to intervention is to ensure that all images for which colors are to be adjusted are referenced as HREF and not as IMG. As explained above, the Netscape browser automatically integrates images referenced as IMG, but those that are simply marked as HREF are processed according to the rules established by the mailcap mechanism. The Mosaic(TM) browser never integrates images and only processes them according to the dictates of the mailcap mechanism. The Netscape browser provides no way to access images once they are integrated into the main document window, so no color management can be performed on them. This is unfortunate, and should be corrected in a future version of the browser.
The mailcap file will be consulted for all HREF references. In particular, this means that it will be consulted for the images that are to have their color adjusted. In theory, we could make use of the Unix pipe facility here. Assume that the color adjustment program is named "cmdecodegif," that it is a filter that inputs and outputs GIF files, and that SGI's standard image viewing program is named "imgview". We could simply create a mailcap file with the line:
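In standard mailcap syntax, such a line would look approximately like this (the exact form is my reconstruction of the description that follows; %s is mailcap's placeholder for the data file):

```
image/gif; cmdecodegif < %s | imgview
```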
which says that to display a GIF image, run the cmdecodegif program with the image file as input data, and then take the output of that and feed it as input to the imgview program. Unfortunately, the SGI imgview program cannot read its input from a Unix pipe. (This is probably a bug.) So this theoretically simple approach was not what I implemented. Instead, I first needed to write a simple Unix shell script. A shell script is a series of Unix commands which operate as if they were a single program. The entire shell script reads:
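The script itself is not reproduced in the surviving text. Based on the description, it amounts to two commands; the sketch below wraps them in a shell function, and substitutes placeholder definitions for cmdecodegif and imgview (which are assumed to exist on the real SGI system), so that the sketch is self-contained:

```shell
#!/bin/sh
# Placeholders standing in for the real SGI tools (assumptions):
cmdecodegif() { cat; }       # the real tool color-adjusts a GIF stream
imgview()     { cat "$1"; }  # the real tool displays an image file

# cmview: the viewer script described in the text.  $1 is the GIF file
# name passed in by the mailcap mechanism; "tdntest" is the fixed
# temporary file name mentioned below.
cmview() {
    cmdecodegif < "$1" > tdntest
    imgview tdntest
}
```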
The only difference between this and the Unix pipe command described above is that the output of cmdecodegif is placed in a temporary file called "tdntest". This is sufficient for testing purposes, but a better method for generating temporary file names should be used. The shell script was named cmview.
Given cmview, the mailcap file I actually use contains the line:
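That line would, in standard mailcap syntax, be approximately (the MIME type name is an assumption):

```
image/gif; cmview %s
```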
That invokes the shell script that invokes the program cmdecodegif. That program reads the image that the browser passed to us, adjusts the color and writes the output into a temporary file. Then the regular image viewer, imgview, is used to display the color-adjusted image on the monitor.
An ICC device profile can be as small as 500 bytes for a minimal monitor profile. But it is not uncommon for the profile for a scanner to exceed 20000 bytes. While this is only 2 percent of a one megabyte file, which is not uncommon on Unix workstations, and still only 6 percent of a 640 by 480 full screen PC file, it could be a significant factor in small image files. What was needed was a way to maintain the benefits of sending a full ICC profile and not incur such an increase in file size.
The solution is simple. Translate each color in the palette into a reference color space, place the translated palette in an Application Extension block, and do not send the ICC profile. The only reason for embedding the profile was to allow the adjusting tool to understand the color space of the device on which the image was created. If the color tables are available in a reference color space, we no longer care about the source device. The CIELAB data is as device-independent and portable as an ICC profile. We would be sending at most 256 CIELAB values. It seems that 8 bit CIELAB is sufficiently accurate, so this is only one byte times 3 components times 256 entries, or 768 bytes of data. While this is larger than the smallest monitor profile, it is much smaller than a typical scanner profile. Both techniques are easily supported in the same tagging and decoding tools. So we can pick whichever technique creates the smallest resulting image.
It is important for compatibility reasons that the translated colors be stored in an Application Extension block and not in the original color tables. Most of the existing GIF applications and image viewers do not support CIELAB, nor would there be a way to signal to them the color space of the GIF data. We have to use the Application Extension block if we do not wish to break compatibility.
Results
Normally, a results section in a technical paper should be filled with measurements and quantified data. However, this project was more qualitative than quantitative in nature. It was not a goal to provide an assessment of how well color management works, or how closely the image on the display monitor could be brought to resemble the image on the source monitor or that being scanned. Those are tests of the quality of the underlying color management system and might be a fit subject for another research project. Indeed, it has doubtless been so.
Instead, this was a project to assess the feasibility of providing color management in a loosely-coupled distributed system and to do so compatibly with a large base of installed software. The primary results are therefore a binary choice: either it is achievable or it is not. This paper has shown that it is feasible and how it can be achieved.
Nevertheless, I felt a strong desire to see whether the addition of color management to the Web would produce images that "looked better." Two factors were of particular concern: the relative scarcity of color calibrated monitors, and the fact that GIF images only have a palette of 256 colors.
Of course, the best results require monitor calibration. There are two ways to calibrate the monitor. The first is to generate an ICC profile that describes the current state of the monitor. One of the test monitors was a Barco Reference calibrator. Software provided with the monitor reported information such as the phosphor chromaticities, the white point, and a gamma curve for each color channel. From this, it was easy to generate an ICC profile. The second approach to calibration is to bring the monitor's response into line with a stable ICC profile. This approach was used on a Sony Trinitron monitor, using a profile provided by Kodak and a photometer and software provided by Sequel Imaging.
The simplest test of color quality is to view the same image on two adjacent monitors. Consider one to be the reference and the other the target. If the target and reference are not both calibrated, the color adjusted images seem to vary as much from the reference as do the unadjusted images. This just shows how wide the variance in a monitor's color space can be over time. This reinforces our notion of the importance of calibration.
There is one significant exception to this: if the image was created on a scanner, color-adjusted results are markedly improved, even on an uncalibrated monitor. I think this is because the scanner's color space is so different from any monitor's. Scanners usually work at a different white point (D65, rather than the 9300K typical of monitors) and with a very different gamma. Given the number of GIF images displayed on the Web that were scanned on desktop scanners, this suggests that color management may often be worthwhile, even when we expect the display device to be an uncalibrated monitor.
Nevertheless, the clear conclusion to be drawn is that calibration of monitors is required for reliable and accurate color management. In particular, anyone considering selling products for which color is a major consideration in purchase decisions would be well advised to find an inexpensive and simple solution to the calibration problem. There are two interrelated aspects to this problem: finding an inexpensive calibration tool and encouraging users to perform the calibration. Relatively inexpensive calibration tools are available now, especially those designed exclusively for monitor colorimetry. Strong market demand, created by a large pool of users who want to calibrate their monitors, would push the vendors far enough along the production curve to lower the price further. Sufficiently inexpensive calibration tools and the long-anticipated arrival of interactive television services could foster online catalog programs that coax users to calibrate their displays before making a final purchase decision. Given the high rate of returns for catalog sales, and that color is a major factor in that return rate, online catalog vendors would have a strong incentive to promote calibration as a feature of their products. So a lower price might stimulate demand, and more demand might drive the price down further.
Areas for Further Research
Displaying inline images with color management
As mentioned above, pages would look much better if color adjusted images could be displayed inline, the way other images are displayed. This would be a fairly easy extension to the Netscape browser code. However, it is a project for Netscape to undertake, not its customers.
Adjusting color for printed images
Web browsers do not use the mailcap facility for printing, so this solution will not work as it currently stands. However, most of the pieces put in place for monitor color adjustment will also be needed for printer color adjustment. The embedding of the source profile would work the same, as would the method of translating the color lookup table into CIELAB, and so would the GIF file filter program cmdecodegif. The only change required is to find an opportunity to intervene in the printing process similar to that found in the Web browsers.
Adjusting color for movies and video
Traditional color management systems are probably not appropriate for adjusting the color of video or MPEG movies. Most color management systems could not provide new frames fast enough to keep up with display rates. Processing the entire file before displaying any of it would solve that problem, but it would be slow and would use twice the disk storage space. Since movie files are often many megabytes long, this would create a problem.
Instead, a lightweight color space conversion would be needed. How to achieve this on personal computers seems problematic. The conversion requires a 3x3 matrix multiply for each pixel; if this is to be done on the fly, a substantial amount of processing power is needed. This is within the capabilities of modern graphics workstations, but might be impractical on most personal computers.
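To give a sense of that per-pixel operation, here is a sketch in modern Python with NumPy; the matrix values are made-up placeholders, not drawn from any real profile. The point of the sketch is that the conversion for a whole frame is one large matrix operation, which is what a hardware or vectorized implementation would exploit:

```python
import numpy as np

# A hypothetical 3x3 device-to-device color matrix; real values would
# be derived from the source and destination profiles.
PLACEHOLDER_MATRIX = np.array([[0.90, 0.10, 0.00],
                               [0.05, 0.90, 0.05],
                               [0.00, 0.10, 0.90]])

def convert_frame(frame, m):
    """Apply one 3x3 color conversion to every pixel of an H x W x 3
    frame of values in [0, 1]. Treating the frame as one big matrix
    multiply, rather than looping per pixel, is what makes on-the-fly
    conversion plausible."""
    h, w, _ = frame.shape
    out = frame.reshape(-1, 3) @ m.T      # one row per pixel
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)
```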
The PNG format mentioned above supports true 24 bit images and a compression algorithm that is free of patent restrictions. The current draft specification and other information may be found at ftp.uu.net:/graphics/png. Again, the image format is RGB-based, but it includes colorimetric and gamma information intended to improve the ability to perform color management on source files.
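For illustration, a minimal reader for that information might walk the PNG chunk stream looking for the gAMA and cHRM chunks, both of which store their values as unsigned 32-bit integers scaled by 100,000 (chunk names and encodings as in the PNG specification; the draft current at the time of writing may differ in detail):

```python
import struct

def png_chunks(data):
    """Iterate over (type, payload) pairs for the chunks of a PNG
    byte string, skipping the 8-byte signature and each chunk's CRC."""
    pos = 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8].decode("ascii")
        yield ctype, data[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 length + 4 type + payload + 4 CRC

def read_gamma_and_chroma(data):
    """Pull the colorimetric data from the gAMA and cHRM chunks."""
    gamma, chroma = None, None
    for ctype, payload in png_chunks(data):
        if ctype == "gAMA":
            gamma = struct.unpack(">I", payload)[0] / 100000.0
        elif ctype == "cHRM":
            # white x, y then red, green, blue x, y
            chroma = [v / 100000.0 for v in struct.unpack(">8I", payload)]
    return gamma, chroma
```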
The information provided is almost identical to the minimal required information for an ICC monitor profile. The only difference is that PNG supports description of the monitor's tone reproduction only as a single gamma value. The ICC supports several other options: a gamma value per color component, a linear slope per color component, or a color lookup table per component. Tone reproduction is a physical phenomenon and is only approximated by mathematical curves. Most gamma values are generated as a least squares fit against actual measured data. The ICC allows profile creators to provide the data instead of the approximation. Of course, there is a trade-off between accuracy and profile size.
It would be worthwhile to track the PNG specification and to experiment with color management of PNG images. Because the colorimetric data provided is so similar to that used in the ICC profiles, the main difference between color adjusted GIF images and color adjusted PNG images is going to be the difference between 8 bit and 24 bit images. More accurately, the difference is between 24 bit images created from a palette of 256 colors using appropriate error diffusion or other dithering techniques and those created from a palette of approximately 16 million colors using no dithering.
As mentioned above, the color quality of images can be traded against spatial resolution by using standard dithering techniques such as ordered dithering or error diffusion. Images whose content needs only a small palette of colors will look unchanged. Depending on the content of the image, images with larger palettes may or may not be noticeably less detailed. For some purposes, the loss of detail will prohibit dithering. For others, the improvement in color accuracy will make dithering an attractive alternative.
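The error diffusion technique itself is standard. A minimal single-channel Floyd-Steinberg sketch, quantizing against a fixed palette, looks like this (grayscale for brevity; a real implementation would diffuse each color channel):

```python
import numpy as np

def error_diffuse(image, palette):
    """Quantize a grayscale image (values in [0, 1]) to indices into
    a fixed palette using Floyd-Steinberg error diffusion: each
    pixel's quantization error is pushed onto its as-yet-unprocessed
    neighbors, preserving average intensity at the cost of detail."""
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            idx = int(np.argmin(np.abs(palette - img[y, x])))
            out[y, x] = idx
            err = img[y, x] - palette[idx]
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

For example, a uniform 50% gray dithered against a two-entry black-and-white palette comes out as a fine pattern whose average intensity is still close to 50%.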
Web pages often contain multiple images. All of these images will be displayed on one computer screen in one window. If that window uses an indexed color map, then all the colors in all the palettes for all the images in the window must be allocated out of that same color map. It is not difficult to run out of color table space in an 8 bit window, even if no individual image uses more than 256 colors. Because this has been a problem, our WebMagic authoring tool offers a way to allocate a color cube within that color table and then use error diffusion to encode all the images for the page. Results are astonishingly good with only a four entry color cube. With a 64 entry cube, results are indistinguishable from the original for all but the most demanding images. Combining this approach with a color management solution to adjust the axes of the color cubes could produce even better results.
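WebMagic's encoding is not described in detail here, but the core of the shared color cube idea can be sketched as follows; the helper names are mine, not WebMagic's, and a real encoder would combine this lookup with error diffusion as above:

```python
def cube_palette(k):
    """Build a shared k*k*k palette: one entry per lattice point of a
    k-level cube spanning RGB space. Every image on the page is then
    encoded against this single palette, so the window's color table
    can never overflow."""
    steps = [i / (k - 1) for i in range(k)]
    return [(r, g, b) for r in steps for g in steps for b in steps]

def cube_index(rgb, k):
    """Map an RGB pixel with components in [0, 1] to the index of its
    nearest cube entry; error diffusion would then spread the residual
    onto neighboring pixels."""
    q = [min(k - 1, round(c * (k - 1))) for c in rgb]
    return (q[0] * k + q[1]) * k + q[2]
```

A 64-entry cube corresponds to k = 4, i.e. four levels per channel.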
A Standard RGB Space
If a device color space could be adequately specified as a standard, then colors could be transferred in that space without the need to pass information describing that space with every image. Ralf Kuron, of FOGRA, has suggested that we define a standard RGB space for transferring GIF images. He proposes that this space be a representation of the "average" monitor.(2) The document creator would use a color management system to convert from the source device's color space into this standard RGB. If the person viewing the document has a calibrated monitor and CMS, then the image can be adjusted at display time. If not, the uncorrected image display should be "good enough," or at least it cannot be improved upon.
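Under this proposal, the creator-side conversion amounts to linearizing the source pixel, passing through a device-independent space such as XYZ, and re-encoding for the standard space. A sketch of that chain follows; the matrices and gamma values used when calling it are placeholders, not part of any actual or proposed standard:

```python
import numpy as np

def to_standard_rgb(rgb, m_src, m_std, g_src, g_std):
    """Convert one gamma-encoded source RGB pixel into a standard RGB
    space: undo the source gamma, map source RGB -> XYZ using the
    source device matrix, map XYZ -> standard RGB by inverting the
    standard space's matrix, then apply the standard gamma encoding."""
    lin = np.asarray(rgb, dtype=float) ** g_src       # linearize source
    xyz = m_src @ lin                                 # source RGB -> XYZ
    std = np.linalg.inv(m_std) @ xyz                  # XYZ -> standard RGB
    return np.clip(std, 0.0, 1.0) ** (1.0 / g_std)    # re-encode
```

When source and standard spaces coincide, the round trip is the identity, which is the sense in which a viewer whose monitor matches the standard needs no correction at all.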
The color management industry has frequently searched for acceptable image interchange color spaces. To date, no consensus has been achieved. It is not clear whether the search for a standard RGB space would do better. Any standard choice of phosphor chromaticities, white point, and gamma will favor some vendors at the expense of others, because either no conversion, or only minimal conversions, would be needed. Also, given the extent and speed with which devices drift from calibration, it is quite possible that results would be no more accurate than they are today. Nevertheless, the idea of standardized image interchange color spaces does have great appeal and should be investigated further.
The project showed that it is possible to introduce color management into a distributed document production and viewing environment. Color management on the World Wide Web is feasible and can be added in a manner compatible with current tools. The GIF extension required a small amount of work and can be implemented to add only a very small amount of data per image. Because the processing is relatively lightweight, no noticeable performance impact was detected. Finally, and most importantly, colors look better.
(1) Graphics Interchange Format, Version 89a, CompuServe Incorporated, Columbus, Ohio, 1989.
(2) Ralf Kuron, "Accurate Colors in Online Systems," unpublished draft, private communication, August 10, 1995.
I would like to thank Victor Reilly and Ashmeet Sidana for their assistance in this project.