JPEG

A photo of a flower compressed with successively more lossy compression ratios from left to right.
File name extension: .jpg, .jpeg, .jpe; .jif, .jfif, .jfi (containers)
Internet media type: image/jpeg
Type code: JPEG
Uniform Type Identifier: public.jpeg
Magic number: ff d8
Developed by: Joint Photographic Experts Group
Type of format: lossy image format

In computing, JPEG (pronounced JAY-peg; IPA: /ˈdʒeɪpɛɡ/) is a commonly used method of lossy compression for photographic images. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10 to 1 compression with little perceptible loss in image quality.

In addition to being a compression method, JPEG is often considered a file format. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished and are simply called JPEG.

The MIME media type for JPEG is image/jpeg (defined in RFC 1341).

## The JPEG standard

The name "JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the standard. The Joint Photographic Experts Group is a joint committee between ISO and ITU-T (formerly CCITT which created the JPEG and JPEG 2000 standards The group was organized in 1986, issuing a standard in 1992, which was approved in 1994 as ISO 10918-1. Year 1986 ( MCMLXXXVI) was a Common year starting on Wednesday (link displays 1986 Gregorian calendar) Year 1992 ( MCMXCII) was a Leap year starting on Wednesday (link will display full 1992 Gregorian calendar) Year 1994 ( MCMXCIV) was a Common year starting on Saturday (link will display full 1994 Gregorian calendar) JPEG is distinct from MPEG (Moving Picture Experts Group), which produces compression schemes for video. The Moving Picture Experts Group, commonly referred to as simply MPEG, is a Working group of ISO / IEC charged with the development of video and

The JPEG standard specifies both the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, and the file format used to contain that stream.

## Recommended usage

The JPEG compression algorithm is at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage in particular, where the bandwidth used by an image is important, JPEG is the ideal photographic image format.

On the other hand, JPEG is not as well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels cause noticeable artifacts. Such images are better saved in TIFF format (for local usage) or in GIF or PNG format (for web usage).

JPEG is also not well suited to files that will undergo multiple edits, as some image quality is usually lost each time the image is decompressed and recompressed (generation loss). It is preferable to use a lossless format such as TIFF while working on an image, saving the final image as JPEG only after all editing is complete.

## JPEG compression

A chart showing the relative quality of various JPEG encoding settings, comparing a file saved as a JPEG normally with one saved using Photoshop's "save for web" option

The compression method is usually lossy, meaning that some visual quality is lost in the process and cannot be restored. There are variations on the standard baseline JPEG that are lossless; however, these are not yet widely supported.

There is also an interlaced "Progressive JPEG" format, in which data is compressed in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data. However, progressive JPEGs are not as widely supported.

There are also many medical imaging systems that create and process 12-bit JPEG images. The 12-bit JPEG format has been part of the JPEG specification for some time, but again, this format is not as widely supported.

### Lossless editing

A number of alterations to a JPEG image can be performed losslessly (that is, without recompression and the associated quality loss) as long as the image size is a multiple of 1 MCU block (Minimum Coded Unit), usually 16 pixels in both directions for 4:2:0 chroma subsampling.

Blocks can be rotated in 90 degree increments, flipped in the horizontal, vertical and diagonal axes and moved about in the image. Not all blocks from the original image need to be used in the modified one.

The top and left of a JPEG image must lie on a block boundary, but the bottom and right need not do so. This limits the possible lossless crop operations, and also what flips and rotates can be performed on an image whose edges do not lie on a block boundary for all channels.

When using lossless cropping, if the bottom or right side of the crop region is not on a block boundary then the rest of the data from the partially used blocks will still be present in the cropped file and can be recovered relatively easily by anyone with a hex editor and an understanding of the format.
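To make the boundary rules concrete, here is a minimal sketch of a feasibility check for lossless cropping; the function name and the 16-pixel MCU default are illustrative assumptions, not part of the standard.

```python
def lossless_crop_info(left, top, right, bottom, mcu=16):
    """Check a crop rectangle against the MCU-alignment rules above.

    Returns (feasible, residual): the crop can be done without
    recompression only if its top-left corner lies on an MCU boundary;
    an unaligned bottom/right edge is permitted, but the partially used
    blocks then keep their full data in the cropped file.
    """
    feasible = (left % mcu == 0) and (top % mcu == 0)
    residual = (right % mcu != 0) or (bottom % mcu != 0)
    return feasible, residual

# A crop at (16, 32) is aligned, but edges at (116, 132) leave hidden data.
print(lossless_crop_info(16, 32, 116, 132))  # (True, True)
```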

It is also possible to transform between baseline and progressive formats without any loss of quality, since the only difference is the order in which the coefficients are placed in the file.

## JPEG files

The file format is known as 'JPEG Interchange Format' (JIF), as specified in Annex B of the standard. However, this "pure" file format is rarely used, primarily because of the difficulty of programming encoders and decoders that fully implement all aspects of the standard and because of certain shortcomings of the standard:

• Color Space definition
• Component Sub-Sampling Registration definition
• Pixel Aspect Ratio definition

A couple of additional standards have evolved to address these issues. The first of these, released in 1992, was the JPEG File Interchange Format (JFIF), followed in recent years by the Exchangeable image file format (Exif) and ICC color profiles.

There is some confusion between the original 'JPEG Interchange Format' (JIF) and the similarly titled 'JPEG File Interchange Format' (JFIF). In some ways JFIF is a cut-down version of the JIF standard, in that it specifies certain constraints (such as a standard color space), while in other ways it is an extension of JIF due to its standard Application Segment header. The documentation for the original JFIF standard states:

JPEG File Interchange Format is a minimal file format which enables JPEG bitstreams to be exchanged between a wide variety of platforms and applications. This minimal format does not include any of the advanced features found in the TIFF JPEG specification or any application specific file format. Nor should it, for the only purpose of this simplified format is to allow the exchange of JPEG compressed images. [1]

Image files that employ JPEG compression are commonly called "JPEG files". Most image capture devices (such as digital cameras) and most image editing software programs that write to a "JPEG file" are actually creating a file in the JFIF and/or Exif format[2].

Strictly speaking, the JFIF and Exif standards are incompatible because they each specify that their header appears first. In practice, most JPEG files in Exif format contain a small JFIF header that precedes the Exif header. This allows older readers to correctly handle the older format JFIF header, while newer readers also decode the following Exif header.
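As a rough illustration of this layout, the sketch below reads the first few marker segments of a file and reports whether a JFIF (APP0) and/or Exif (APP1) header is present; the function name and the 4 KB probe size are arbitrary choices.

```python
def jpeg_header_kinds(path):
    """Scan the segments right after SOI for JFIF (APP0) and Exif (APP1)
    identifiers. A sketch: real files may carry other segments too."""
    kinds = []
    with open(path, "rb") as f:
        data = f.read(4096)
    if data[:2] != b"\xff\xd8":                  # must start with SOI
        return kinds
    pos = 2
    while pos + 4 <= len(data) and data[pos] == 0xFF:
        marker = data[pos + 1]
        length = int.from_bytes(data[pos + 2:pos + 4], "big")
        payload = data[pos + 4:pos + 2 + length]
        if marker == 0xE0 and payload.startswith(b"JFIF\x00"):
            kinds.append("JFIF")
        elif marker == 0xE1 and payload.startswith(b"Exif\x00\x00"):
            kinds.append("Exif")
        elif marker == 0xDA:                     # SOS: image data begins
            break
        pos += 2 + length
    return kinds

# A typical camera file prints ['JFIF', 'Exif'], matching the note above.
```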

### JPEG file extensions

The most common filename extensions for files employing JPEG compression are .jpg and .jpeg, though .jpe, .jfif and .jif are also used. It is also possible for JPEG data to be embedded in other file types; TIFF-encoded files, for example, often embed a JPEG image as a thumbnail of the main image.

### Color profile

Many JPEG files embed an ICC color profile (color space). Commonly used color profiles include sRGB and Adobe RGB. Because these color spaces use a non-linear transformation, the dynamic range of an 8-bit JPEG file is about 11 stops.

However, a large number of applications are not able to deal with JPEG color profiles and simply ignore them (e.g., the GIMP and all web browsers except Apple Safari).

## Syntax and structure

A JPEG image contains a sequence of markers, each of which begins with a 0xFF byte followed by a byte indicating what kind of marker it is. Some markers consist of just those two bytes; others are followed by two bytes indicating the length of marker-specific payload data that follows. (The length includes the two bytes for the length, but not the two bytes for the marker.) Some markers are followed by entropy-coded data; the length of such a marker does not include the entropy-coded data.

Within the entropy-coded data, after any 0xFF byte, a 0x00 byte is inserted by the encoder before the next byte, so that there does not appear to be a marker where none is intended. Decoders must skip this 0x00 byte. This technique, called byte stuffing, is only applied to the entropy-coded data, not to marker payload data.
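A sketch of byte stuffing in both directions, assuming (as holds within entropy-coded data) that every 0xFF 0x00 pair is stuffing; a real decoder must also stop when 0xFF is followed by a non-zero byte, since that is a marker:

```python
def stuff_bytes(entropy_coded: bytes) -> bytes:
    """Encoder side: insert 0x00 after every 0xFF so no byte pair in the
    entropy-coded data can be mistaken for a marker."""
    return entropy_coded.replace(b"\xff", b"\xff\x00")

def unstuff_bytes(stuffed: bytes) -> bytes:
    """Decoder side: skip the 0x00 inserted after each 0xFF."""
    return stuffed.replace(b"\xff\x00", b"\xff")

raw = bytes([0x12, 0xFF, 0x34])
assert unstuff_bytes(stuff_bytes(raw)) == raw
print(stuff_bytes(raw).hex())  # 12ff0034
```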

Common JPEG markers

| Short name | Bytes | Payload | Name | Comments |
|---|---|---|---|---|
| SOI | 0xFFD8 | none | Start Of Image | |
| SOF0 | 0xFFC0 | variable size | Start Of Frame (Baseline DCT) | Indicates that this is a baseline DCT-based JPEG, and specifies the width, height, number of components, and component subsampling (e.g., 4:2:0). |
| SOF2 | 0xFFC2 | variable size | Start Of Frame (Progressive DCT) | Indicates that this is a progressive DCT-based JPEG, and specifies the width, height, number of components, and component subsampling (e.g., 4:2:0). |
| DHT | 0xFFC4 | variable size | Define Huffman Table(s) | Specifies one or more Huffman tables. |
| DQT | 0xFFDB | variable size | Define Quantization Table(s) | Specifies one or more quantization tables. |
| DRI | 0xFFDD | 2 bytes | Define Restart Interval | Specifies the interval between RSTn markers, in macroblocks. |
| SOS | 0xFFDA | variable size | Start Of Scan | Begins a top-to-bottom scan of the image. In baseline DCT JPEG images, there is generally a single scan. Progressive DCT JPEG images usually contain multiple scans. This marker specifies which slice of data it will contain, and is immediately followed by entropy-coded data. |
| RSTn | 0xFFDn | none | Restart | Inserted every r macroblocks, where r is the restart interval set by a DRI marker. Not used if there was no DRI marker. n, the low 4 bits of the marker code, cycles from 0 to 7. |
| APPn | 0xFFEn | variable size | Application-specific | For example, an Exif JPEG file uses an APP1 marker to store metadata, laid out in a structure based closely on TIFF. |
| COM | 0xFFFE | variable size | Comment | Contains a text comment. |
| EOI | 0xFFD9 | none | End Of Image | |

[3]

There are other Start Of Frame markers that introduce other kinds of JPEG.

Since several vendors might use the same APPn marker type, application-specific markers often begin with a standard or vendor name (e.g., "Exif" or "Adobe") or some other identifying string.

At a restart marker, block-to-block predictor variables are reset, and the bitstream is synchronized to a byte boundary. Restart markers provide a means of recovery after a bitstream error. Since the runs of macroblocks between restart markers may be independently decoded, these runs may be decoded in parallel.

## JPEG codec example

Although a JPEG file can be encoded in various ways, most commonly it is done with JFIF encoding. The encoding process consists of several steps:

1. The representation of the colors in the image is converted from RGB to YCbCr, consisting of one luma component (Y), representing brightness, and two chroma components (Cb and Cr), representing color. This step is sometimes skipped.
2. The resolution of the chroma data is reduced, usually by a factor of 2. This reflects the fact that the eye is less sensitive to fine color details than to fine brightness details.
3. The image is split into blocks of 8×8 pixels, and for each block, each of the Y, Cb, and Cr data undergoes a discrete cosine transform (DCT), which produces a kind of spatial frequency spectrum.
4. The amplitudes of the frequency components are quantized. Human vision is much more sensitive to small variations in color or brightness over large areas than to the strength of high-frequency brightness variations. Therefore, the magnitudes of the high-frequency components are stored with a lower accuracy than the low-frequency components. The quality setting of the encoder (for example 50% or 95%) affects to what extent the resolution of each frequency component is reduced. If an excessively low quality setting is used, the high-frequency components are discarded altogether.
5. The resulting data for all 8×8 blocks is further compressed with a lossless algorithm, a variant of Huffman encoding.

The decoding process reverses these steps. In the remainder of this section, the encoding and decoding processes are described in more detail.

### Encoding

Many of the options in the JPEG standard are not commonly used, and as mentioned above, most image software uses the simpler JFIF format when creating a JPEG file, which among other things specifies the encoding method. Here is a brief description of one of the more common methods of encoding when applied to an input that has 24 bits per pixel (eight each of red, green, and blue). This particular option is a lossy data compression method.

#### Color space transformation

First, the image should be converted from RGB into a different color space called YCbCr. It has three components Y, Cb and Cr: the Y component represents the brightness of a pixel, and the Cb and Cr components represent the chrominance (split into blue and red components). This is the same color space used by digital color television as well as digital video, including video DVDs, and is similar to the way color is represented in analog PAL video and MAC (but not in analog NTSC, which uses the YIQ color space). The YCbCr color space conversion allows greater compression without a significant effect on perceptual image quality (or greater perceptual image quality for the same compression). The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel; this more closely represents the human visual system.

This conversion to YCbCr is specified in the JFIF standard, and should be performed for the resulting JPEG file to have maximum compatibility. However, some JPEG implementations in "highest quality" mode do not apply this step and instead keep the color information in the RGB color model, where the image is stored in separate channels for red, green and blue luminance. This results in less efficient compression, and would not likely be used if file size were an issue.
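A minimal sketch of the conversion, using the full-range JFIF equations (the function name is an illustrative choice):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert (..., 3) 8-bit RGB values to YCbCr per the JFIF
    full-range equations, with the chroma channels centered on 128."""
    m = np.array([[ 0.299,     0.587,     0.114   ],   # Y
                  [-0.168736, -0.331264,  0.5     ],   # Cb
                  [ 0.5,      -0.418688, -0.081312]])  # Cr
    ycc = rgb.astype(np.float64) @ m.T
    ycc[..., 1:] += 128.0
    return np.clip(ycc, 0.0, 255.0)

print(rgb_to_ycbcr(np.array([[255, 0, 0]])).round())  # [[ 76.  85. 255.]]
```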

#### Downsampling

Due to the densities of color- and brightness-sensitive receptors in the human eye, humans can see considerably more fine detail in the brightness of an image (the Y component) than in the color of an image (the Cb and Cr components). Using this knowledge, encoders can be designed to compress images more efficiently.

The transformation into the YCbCr color model enables the next step, which is to reduce the spatial resolution of the Cb and Cr components (called "downsampling" or "chroma subsampling"). The ratios at which the downsampling can be done in JPEG are 4:4:4 (no downsampling), 4:2:2 (reduction by a factor of 2 in the horizontal direction), and most commonly 4:2:0 (reduction by a factor of 2 in both the horizontal and vertical directions). For the rest of the compression process, Y, Cb and Cr are processed separately and in a very similar manner. Downsampling the chroma components saves 33% or 50% of the space taken by the image without drastically affecting perceptual image quality.
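A sketch of 4:2:0 downsampling by 2×2 averaging (averaging is one common choice; plain decimation is also used, and even dimensions are assumed for brevity):

```python
import numpy as np

def downsample_420(chroma):
    """Halve a chroma channel in both directions by averaging 2x2 blocks."""
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

cb = np.arange(16, dtype=np.float64).reshape(4, 4)
print(downsample_420(cb))  # [[ 2.5  4.5]
                           #  [10.5 12.5]]
```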

#### Block splitting

After subsampling, each channel must be split into 8×8 blocks of pixels. If the data for a channel does not represent an integer number of blocks, the encoder must fill the remaining area of the incomplete blocks with some form of dummy data:

• filling the edge pixels with a fixed color (typically black) creates dark artifacts along the visible part of the border
• repeating the edge pixels is a common but non-optimal technique that avoids the visible border, but it still creates artifacts with the colorimetry of the filled cells (this option is sketched after the list)
• a better strategy is to fill pixels using colors that preserve the DCT coefficients of the visible pixels, at least for the low-frequency ones (for example, filling with the average color of the visible part will preserve the first DC coefficient, but best fitting the next two AC coefficients will produce much better results, with less visible 8×8 cell edges along the border).
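A sketch of the edge-replication option from the list above (numpy's "edge" padding mode repeats the outermost pixels):

```python
import numpy as np

def pad_to_blocks(channel, block=8):
    """Pad a channel to a whole number of 8x8 blocks by repeating its
    edge pixels (common but non-optimal, as noted above)."""
    h, w = channel.shape
    return np.pad(channel, ((0, (-h) % block), (0, (-w) % block)),
                  mode="edge")

print(pad_to_blocks(np.ones((10, 13))).shape)  # (16, 16)
```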

#### Discrete cosine transform

The 8×8 sub-image shown in 8-bit greyscale

Next, each component (Y, Cb, Cr) of each 8×8 block is converted to a frequency-domain representation, using a normalized, two-dimensional type-II discrete cosine transform (DCT).

As an example, one such 8×8 8-bit subimage might be:

$\begin{bmatrix} 52 & 55 & 61 & 66 & 70 & 61 & 64 & 73 \\ 63 & 59 & 55 & 90 & 109 & 85 & 69 & 72 \\ 62 & 59 & 68 & 113 & 144 & 104 & 66 & 73 \\ 63 & 58 & 71 & 122 & 154 & 106 & 70 & 69 \\ 67 & 61 & 68 & 104 & 126 & 88 & 68 & 70 \\ 79 & 65 & 60 & 70 & 77 & 68 & 58 & 75 \\ 85 & 71 & 64 & 59 & 55 & 61 & 65 & 83 \\ 87 & 79 & 69 & 68 & 65 & 76 & 78 & 94\end{bmatrix}$

Before computing the DCT of the subimage, its grey values are shifted from a positive range to one centered around zero. For an 8-bit image each pixel has 256 possible values: [0, 255]. To center around zero it is necessary to subtract half the number of possible values, or 128.

$\frac{2^{bit}}{2} = \frac{2^8}{2} = 2^7 = 128$

Subtracting 128 from each pixel value yields pixel values in [−128, 127]:

$\begin{array}{c}x \\\longrightarrow \\\begin{bmatrix} -76 & -73 & -67 & -62 & -58 & -67 & -64 & -55 \\ -65 & -69 & -73 & -38 & -19 & -43 & -59 & -56 \\ -66 & -69 & -60 & -15 & 16 & -24 & -62 & -55 \\ -65 & -70 & -57 & -6 & 26 & -22 & -58 & -59 \\ -61 & -67 & -60 & -24 & -2 & -40 & -60 & -58 \\ -49 & -63 & -68 & -58 & -51 & -60 & -70 & -53 \\ -43 & -57 & -64 & -69 & -73 & -67 & -63 & -45 \\ -41 & -49 & -59 & -60 & -63 & -52 & -50 & -34\end{bmatrix}\end{array}\Bigg\downarrow y$

The next step is to take the two-dimensional DCT, which is given by:

The DCT transforms 64 pixels to a linear combination of these 64 squares; the horizontal index is u and the vertical index is v.
$\ G_{u,v} = \alpha(u) \alpha(v) \sum_{x=0}^7 \sum_{y=0}^7 g_{x,y} \cos \left[\frac{\pi}{8} \left(x+\frac{1}{2}\right) u \right] \cos \left[\frac{\pi}{8} \left(y+\frac{1}{2}\right) v \right]$

where

• $\ u$ is the horizontal spatial frequency, for the integers $\ 0 \leq u < 8$.
• $\ v$ is the vertical spatial frequency, for the integers $\ 0 \leq v < 8$.
• $\alpha(n) = \begin{cases} \sqrt{ \frac{1}{8} }, & \mbox{if }n=0 \\ \sqrt{ \frac{2}{8} }, & \mbox{otherwise}\end{cases}$ is a normalizing scale factor
• $\ g_{x,y}$ is the pixel value at coordinates $\ (x,y)$
• $\ G_{u,v}$ is the DCT coefficient at coordinates $\ (u,v)$

If we perform this transformation on our matrix above, and then round to the nearest integer, we get

$\begin{array}{c}u \\\longrightarrow \\\begin{bmatrix} -415 & -30 & -61 & 27 & 56 & -20 & -2 & 0 \\ 4 & -22 & -61 & 10 & 13 & -7 & -9 & 5 \\ -47 & 7 & 77 & -25 & -29 & 10 & 5 & -6 \\ -49 & 12 & 34 & -15 & -10 & 6 & 2 & 2 \\ 12 & -7 & -13 & -4 & -2 & 2 & -3 & 3 \\ -8 & 3 & 2 & -6 & -2 & 1 & 4 & 2 \\ -1 & 0 & 0 & -2 & -1 & -3 & 4 & -1 \\ 0 & 0 & -1 & -4 & -1 & 0 & 1 & 2\end{bmatrix}\end{array}\Bigg\downarrow v$

Note the rather large value of the top-left corner. This is the DC coefficient. The remaining 63 coefficients are called the AC coefficients. The DCT temporarily increases the bit depth of the image, since the DCT coefficients of an 8-bit/component image take up to 11 or 12 bits (depending on fidelity of the DCT calculation) to store. This may force the codec to temporarily use 16-bit bins to hold these coefficients, doubling the formal size of the image representation at this point. The advantage of the DCT is its tendency to aggregate most of the signal in one corner of the result, as may be seen above. The quantization step to follow accentuates this effect while simultaneously reducing the size of the DCT coefficients to 8 bits or less, resulting in a signal with a large trailing region of zeros that the entropy stage can simply discard. The temporary increase in size at this stage is not a performance concern for most JPEG implementations, because typically only a very small part of the image is stored in full DCT form at any given time during the encoding or decoding process.
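For concreteness, here is a direct (unoptimized) transcription of the transform defined above; running it on the level-shifted block reproduces the coefficient matrix just shown. Production codecs use fast factorizations instead (e.g., scipy.fft.dctn(g, norm='ortho') computes the same values).

```python
import numpy as np

def dct2(g):
    """Normalized two-dimensional type-II DCT of an 8x8 block,
    transcribed directly from the formula above."""
    alpha = np.full(8, np.sqrt(2.0 / 8.0))
    alpha[0] = np.sqrt(1.0 / 8.0)
    u = np.arange(8)
    # basis[u, x] = cos[(pi/8) (x + 1/2) u]
    basis = np.cos(np.pi / 8.0 * np.outer(u, u + 0.5))
    return np.outer(alpha, alpha) * (basis @ g @ basis.T)

# g: the level-shifted 8x8 block shown above
g = np.array([
    [-76, -73, -67, -62, -58, -67, -64, -55],
    [-65, -69, -73, -38, -19, -43, -59, -56],
    [-66, -69, -60, -15,  16, -24, -62, -55],
    [-65, -70, -57,  -6,  26, -22, -58, -59],
    [-61, -67, -60, -24,  -2, -40, -60, -58],
    [-49, -63, -68, -58, -51, -60, -70, -53],
    [-43, -57, -64, -69, -73, -67, -63, -45],
    [-41, -49, -59, -60, -63, -52, -50, -34]], dtype=np.float64)

G = dct2(g)                       # np.round(G) matches the matrix above
print(int(round(G[0, 0])))        # -415, the DC coefficient
```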

#### Quantization

The human eye is good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high-frequency brightness variation. This fact allows one to greatly reduce the amount of information in the high-frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This is the main lossy operation in the whole process. As a result, it is typically the case that many of the higher-frequency components are rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to store.

A typical quantization matrix, as specified in the original JPEG Standard[4], is as follows:

$\begin{bmatrix} 16 & 11 & 10 & 16 & 24 & 40 & 51 & 61 \\ 12 & 12 & 14 & 19 & 26 & 58 & 60 & 55 \\ 14 & 13 & 16 & 24 & 40 & 57 & 69 & 56 \\ 14 & 17 & 22 & 29 & 51 & 87 & 80 & 62 \\ 18 & 22 & 37 & 56 & 68 & 109 & 103 & 77 \\ 24 & 35 & 55 & 64 & 81 & 104 & 113 & 92 \\ 49 & 64 & 78 & 87 & 103 & 121 & 120 & 101 \\ 72 & 92 & 95 & 98 & 112 & 100 & 103 & 99\end{bmatrix}$

The quantized DCT coefficients are computed with

$B_{j,k} = \mathrm{round} \left( \frac{A_{j,k}}{Q_{j,k}} \right) \mbox{ for } j=0,1,2,\cdots,N_1-1; k=0,1,2,\cdots,N_2-1$

where A is the matrix of unquantized DCT coefficients, Q is the quantization matrix above, and B is the matrix of quantized DCT coefficients. (Note that this is elementwise division, in no way matrix multiplication.)

Using this quantization matrix with the DCT coefficient matrix from above results in:

$\begin{bmatrix} -26 & -3 & -6 & 2 & 2 & -1 & 0 & 0 \\ 0 & -2 & -4 & 1 & 1 & 0 & 0 & 0 \\ -3 & 1 & 5 & -1 & -1 & 0 & 0 & 0 \\ -4 & 1 & 2 & -1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$

For example, using −415 (the DC coefficient) and rounding to the nearest integer

$\mathrm{round}\left( \frac{-415}{16}\right)=\mathrm{round}\left( -25.9375\right)=-26$
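Continuing the DCT sketch above (with G as computed there), quantization is a single elementwise divide-and-round; Q is the quantization matrix shown earlier:

```python
import numpy as np

Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

# Elementwise division and rounding -- not matrix multiplication.
B = np.round(G / Q).astype(int)
print(B[0, 0])  # -26, matching the worked example
```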

#### Entropy coding

Main article: Entropy encoding
Zigzag ordering of JPEG image components

Entropy coding is a special form of lossless data compression. It involves arranging the image components in a "zigzag" order, employing a run-length encoding (RLE) algorithm that groups similar frequencies together, inserting length-coding zeros, and then using Huffman coding on what is left.

The JPEG standard also allows, but does not require, the use of arithmetic coding, which is mathematically superior to Huffman coding. However, this feature is rarely used, as it is covered by patents and is much slower to encode and decode than Huffman coding. Arithmetic coding typically makes files about 5% smaller.

The zigzag sequence for the above quantized coefficients is shown below. (The format shown is just for ease of understanding/viewing.)

 −26 −3 0 −3 −2 −6 2 −4 1 −4 1 1 5 1 2 −1 1 −1 2 0 0 0 0 0 −1 −1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

If the i-th block is represented by Bi and positions within each block are represented by (p, q), where p = 0, 1, ..., 7 and q = 0, 1, ..., 7, then any coefficient in the DCT image can be represented as Bi(p,q). Thus, in the above scheme, the order of encoding pixels (for the i-th block) is Bi(0,0), Bi(0,1), Bi(1,0), Bi(2,0), Bi(1,1), Bi(0,2), Bi(0,3), Bi(1,2) and so on.
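The (p, q) traversal just described can be generated directly: within each anti-diagonal p + q, the scan direction alternates. A sketch:

```python
def zigzag_order(n=8):
    """(p, q) coordinates in zigzag order: sort by anti-diagonal p + q,
    alternating direction between odd and even diagonals."""
    return sorted(((p, q) for p in range(n) for q in range(n)),
                  key=lambda pq: (pq[0] + pq[1],
                                  pq[0] if (pq[0] + pq[1]) % 2 else pq[1]))

print(zigzag_order()[:8])
# [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2)]
```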

Baseline sequential JPEG encoding and decoding processes

This encoding mode is called baseline sequential encoding. Baseline JPEG also supports progressive encoding. While sequential encoding encodes coefficients of a single block at a time (in a zigzag manner), progressive encoding encodes similar-positioned coefficients of all blocks in one go, followed by the next-positioned coefficients of all blocks, and so on. So, if the image is divided into N 8×8 blocks {B0, B1, B2, ..., BN−1}, then progressive encoding encodes Bi(0,0) for all blocks, i.e., for all i = 0, 1, 2, ..., N−1. This is followed by encoding the Bi(0,1) coefficient of all blocks, then the Bi(1,0) coefficient of all blocks, then the Bi(0,2) coefficient of all blocks, and so on. Note that once all similar-positioned coefficients have been encoded, the next position to be encoded is the one occurring next in the zigzag traversal as indicated in the figure above. Baseline progressive JPEG encoding usually gives better compression than baseline sequential JPEG, due to the ability to use different Huffman tables (see below) tailored for different frequencies on each "scan" or "pass" (which includes similar-positioned coefficients), though the difference is not too large.

In the rest of the article, it is assumed that the coefficient pattern generated is due to sequential mode.

In order to encode the above generated coefficient pattern, JPEG uses Huffman encoding. JPEG has a special Huffman code word for ending the sequence prematurely when the remaining coefficients are zero.

Using this special code word: "EOB", the sequence becomes:

 −26 −3 0 −3 −2 −6 2 −4 1 −4 1 1 5 1 2 −1 1 −1 2 0 0 0 0 0 −1 −1 EOB

JPEG's other code words represent combinations of (a) the number of significant bits of a coefficient, including sign, and (b) the number of consecutive zero coefficients that follow it. (Once you know how many bits to expect, it takes 1 bit to represent the choices {−1, +1}, 2 bits to represent the choices {−3, −2, +2, +3}, and so forth.) In our example block, most of the quantized coefficients are small numbers that are not followed immediately by a zero coefficient. These more-frequent cases will be represented by shorter code words.
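The "number of significant bits" (the magnitude category) is just the bit length of the coefficient's absolute value; a sketch:

```python
def magnitude_category(coeff):
    """Bits needed for a nonzero coefficient: category 1 covers {-1, +1},
    category 2 covers {-3, -2, +2, +3}, and so on (zero coefficients are
    handled by the run-length and end-of-block codes described above)."""
    return abs(coeff).bit_length()

for c in (1, -3, 2, -26):
    print(c, magnitude_category(c))  # -26 falls in category 5 (16..31)
```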

The JPEG standard provides general-purpose Huffman tables; encoders may also choose to generate Huffman tables optimized for the actual frequency distributions in images being encoded.

### Compression ratio and artifacts

This image shows the pixels that are different between a non-compressed image and the same image JPEG compressed with a quality of 50%. Darker means a larger difference. Note especially the changes occurring near sharp edges and having a block-like shape.
The compressed 8×8 squares are visible in the scaled-up picture, together with other visual artifacts of the lossy compression.

The resulting compression ratio can be varied according to need by being more or less aggressive in the divisors used in the quantization phase. Ten-to-one compression usually results in an image that cannot be distinguished by eye from the original. One-hundred-to-one compression is usually possible, but will look distinctly artifacted compared to the original. The appropriate level of compression depends on the use to which the image will be put.

Those who use the World Wide Web may be familiar with the irregularities known as compression artifacts that appear in JPEG images. These are due to the quantization step of the JPEG algorithm. They are especially noticeable around eyes in pictures of faces. They can be reduced by choosing a lower level of compression; they may be eliminated by saving an image using a lossless file format, though for photographic images this will usually result in a larger file size. Compression artifacts make low-quality JPEGs unacceptable for storing heightmaps; images created with ray-tracing programs then show noticeable blocky shapes on the terrain.

Some programs allow the user to vary the amount by which individual blocks are compressed. Stronger compression is applied to areas of the image that show fewer artifacts. This way it is possible to manually reduce JPEG file size with less loss of quality.

Since the quantization stage always results in a loss of information, the JPEG standard is always a lossy compression codec. (Information is lost both in quantizing and in rounding the floating-point numbers.) Even if the quantization matrix is a matrix of ones, information will still be lost in the rounding step.

### Decoding

Decoding to display the image consists of doing all the above in reverse.

Taking the DCT coefficient matrix (after adding the difference of the DC coefficient back in)

$\begin{bmatrix} -26 & -3 & -6 & 2 & 2 & -1 & 0 & 0 \\ 0 & -2 & -4 & 1 & 1 & 0 & 0 & 0 \\ -3 & 1 & 5 & -1 & -1 & 0 & 0 & 0 \\ -4 & 1 & 2 & -1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$

and taking the entry-for-entry product with the quantization matrix from above results in

$\begin{bmatrix} -416 & -33 & -60 & 32 & 48 & -40 & 0 & 0 \\ 0 & -24 & -56 & 19 & 26 & 0 & 0 & 0 \\ -42 & 13 & 80 & -24 & -40 & 0 & 0 & 0 \\ -56 & 17 & 44 & -29 & 0 & 0 & 0 & 0 \\ 18 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$

which closely resembles the original DCT coefficient matrix for the top-left portion. Taking the inverse DCT (type-III DCT) results in an image with values (still shifted down by 128)

Notice the slight differences between the original (top) and decompressed image (bottom), which is most readily seen in the bottom-left corner.
$\begin{bmatrix} -68 & -65 & -73 & -70 & -58 & -67 & -70 & -48 \\ -70 & -72 & -72 & -45 & -20 & -40 & -65 & -57 \\ -68 & -76 & -66 & -15 & 22 & -12 & -58 & -61 \\ -62 & -72 & -60 & -6 & 28 & -12 & -59 & -56 \\ -59 & -66 & -63 & -28 & -8 & -42 & -69 & -52 \\ -60 & -60 & -67 & -60 & -50 & -68 & -75 & -50 \\ -54 & -46 & -61 & -74 & -65 & -64 & -63 & -45 \\ -45 & -32 & -51 & -72 & -58 & -45 & -45 & -39\end{bmatrix}$

and adding 128 to each entry

$\begin{bmatrix} 60 & 63 & 55 & 58 & 70 & 61 & 58 & 80 \\ 58 & 56 & 56 & 83 & 108 & 88 & 63 & 71 \\ 60 & 52 & 62 & 113 & 150 & 116 & 70 & 67 \\ 66 & 56 & 68 & 122 & 156 & 116 & 69 & 72 \\ 69 & 62 & 65 & 100 & 120 & 86 & 59 & 76 \\ 68 & 68 & 61 & 68 & 78 & 60 & 53 & 78 \\ 74 & 82 & 67 & 54 & 63 & 64 & 65 & 83 \\ 83 & 96 & 77 & 56 & 70 & 83 & 83 & 89\end{bmatrix}$

This is the decompressed subimage. It can be compared to the original subimage (also see images to the right) by taking the difference (original − decompressed), which results in the error values

$\begin{bmatrix} -8 & -8 & 6 & 8 & 0 & 0 & 6 & -7 \\ 5 & 3 & -1 & 7 & 1 & -3 & 6 & 1 \\ 2 & 7 & 6 & 0 & -6 & -12 & -4 & 6 \\ -3 & 2 & 3 & 0 & -2 & -10 & 1 & -3 \\ -2 & -1 & 3 & 4 & 6 & 2 & 9 & -6 \\ 11 & -3 & -1 & 2 & -1 & 8 & 5 & -3 \\ 11 & -11 & -3 & 5 & -8 & -3 & 0 & 0 \\ 4 & -17 & -8 & 12 & -5 & -7 & -5 & 5\end{bmatrix}$

with an average absolute error of about 5 values per pixel (i.e., $\frac{1}{64} \sum_{x=1}^8 \sum_{y=1}^8 |e(x,y)| = 4.8125$).

The error is most noticeable in the bottom-left corner where the bottom-left pixel becomes darker than the pixel to its immediate right.
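The whole round trip of the worked example can be reproduced with the sketches above plus an inverse (type-III) DCT; decode_block below reuses the basis from the dct2 sketch and is, again, a sketch rather than a conforming decoder:

```python
import numpy as np

def decode_block(B, Q):
    """Dequantize, inverse-DCT (type III), and undo the 128 level shift."""
    alpha = np.full(8, np.sqrt(2.0 / 8.0))
    alpha[0] = np.sqrt(1.0 / 8.0)
    u = np.arange(8)
    basis = np.cos(np.pi / 8.0 * np.outer(u, u + 0.5))
    A = B * Q                                  # entry-for-entry product
    g = basis.T @ (np.outer(alpha, alpha) * A) @ basis
    return np.clip(np.round(g) + 128, 0, 255).astype(int)

# With B and Q from the quantization sketch, this reproduces the
# decompressed subimage shown above (e.g., 60 in the top-left corner).
```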

### Required precision

The JPEG encoding does not fix the precision needed for the output compressed image. In contrast, the JPEG standard (as well as the derived MPEG standards) has very strict precision requirements for the decoding, covering all parts of the decoding process (variable-length decoding, inverse DCT, dequantization, renormalization of outputs); the output from the reference algorithm must not exceed:

• a maximum 1 bit of difference for each pixel component
• low mean square error over each 8×8-pixel block
• very low mean error over each 8×8-pixel block
• very low mean square error over the whole image
• extremely low mean error over the whole image

These assertions are tested on a large set of randomized input images to handle the worst cases; see the IEEE 1180-1990 standard for reference. This has consequences for the implementation of decoders, and it is extremely critical because some encoding processes (notably those used for encoding sequences of images, like MPEG) need to be able to construct, on the encoder side, a reference decoded image. In order to support 8-bit precision per pixel component output, dequantization and inverse DCT transforms are typically implemented with at least 14-bit precision in optimized decoders.

## Effects of JPEG compression

JPEG compression artifacts blend well into photographs with detailed non-uniform textures, allowing higher compression ratios. Notice how a higher compression ratio first affects the high-frequency textures in the upper-left corner of the image, and how the contrasting lines become more fuzzy. A very high compression ratio severely affects the quality of the image, although the overall colors and image form are still recognizable. However, the precision of colors suffers less (for a human eye) than the precision of contours (based on luminance). This is why images should first be transformed into a color model separating the luminance from the chromatic information, before subsampling the chromatic planes (which may also use lower-quality quantization), in order to preserve the precision of the luminance plane with more information bits.

### Sample photographs

For reference, the uncompressed 24-bit RGB bitmap image below (73,242 pixels) would require 219,726 bytes (excluding all other information headers). The file sizes indicated below include the internal JPEG information headers and some metadata. For full-quality images (Q=100), about 8.25 bits per color pixel are required. On grayscale images, a minimum of 6.5 bits per pixel is enough (comparable Q=100 color information requires about 25% more encoded bits). The full-quality image below (Q=100) is encoded at 9 bits per color pixel; the medium-quality image (Q=25) uses 1 bit per color pixel. For most applications, the quality factor should not go below 0.75 bits per pixel (Q=12.5), as demonstrated by the low-quality image. The image at lowest quality uses only 0.13 bits per pixel and displays very poor color; it would only be usable after subsampling to a much lower display size.

NOTE: The above images are not IEEE / CCIR / EBU standard test images, and the encoder settings are not specified or available.
| Image quality | Size (bytes) | Comment |
|---|---|---|
| Full quality (Q = 100) | 83,261 | Extremely minor artifacts |
| Average quality (Q = 50) | 15,138 | Initial signs of subimage artifacts |
| Medium quality (Q = 25) | 9,553 | Stronger artifacts; loss of high-resolution information |
| Low quality (Q = 10) | 4,787 | Severe high-frequency loss; artifacts on subimage boundaries ("macroblocking") are obvious |
| Lowest quality (Q = 1) | 1,523 | Extreme loss of color and detail; the leaves are nearly unrecognizable |

The mid-quality photo uses only one sixth the storage space but has little noticeable loss of detail or visible artifacts. However, once a certain threshold of compression is passed, compressed images show increasingly visible defects. See the article on rate–distortion theory for a mathematical explanation of this threshold effect.

## Potential patent issues

In 2002 Forgent Networks asserted that it owned and would enforce patent rights on the JPEG technology, arising from a patent that had been filed on October 27, 1986, and granted on October 6, 1987 (U.S. Patent 4,698,672). The announcement created a furor reminiscent of Unisys' attempts to assert its rights over the GIF image compression standard.