Iris Recognition: An Emerging Biometric Technology
RICHARD P. WILDES, MEMBER, IEEE

This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular, the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.
Keywords—Biometrics, iris recognition, machine vision, object recognition, pattern recognition.
I. INTRODUCTION
A. Motivation
Technologies that exploit biometrics have the potential for application to the identification and verification of individuals for controlling access to secured areas or materials.1 A wide variety of biometrics have been marshaled in support of this challenge. Resulting systems include those based on automated recognition of retinal vasculature, fingerprints, hand shape, handwritten signature, and voice [24], [40]. Provided a highly cooperative operator, these approaches have the potential to provide acceptable performance. Unfortunately, from the human factors point of view, these methods are highly invasive: Typically, the operator is required to make physical contact with a sensing device or otherwise take some special action (e.g., recite a specific phonemic sequence). Similarly, there is little potential for covert evaluation. One possible alternative to these methods that has the potential to be less invasive is automated face recognition. However, while automated face recognition is a topic of active research, the inherent difficulty of the problem might prevent widely applicable technologies from appearing in the near term [9], [45]. Automated iris recognition is yet another alternative for noninvasive verification and identification of people. Interestingly, the spatial patterns that are apparent in the human iris are highly distinctive to an individual [1], [34] (see, e.g., Fig. 1). Like the face, the iris is an overt body that is available for remote (i.e., noninvasive) assessment. Unlike the human face, however, the variability in appearance of any one iris might be well enough constrained to make possible an automated recognition system based on currently available machine vision technologies.

Manuscript received October 31, 1996; revised February 15, 1997. This work was supported in part by The Sarnoff Corporation and in part by The National Information Display Laboratory. The author is with The Sarnoff Corporation, Princeton, NJ 08543-5300.
1 Throughout this discussion, the term “verification” will refer to recognition relative to a specified data base entry. The term “identification” will refer to recognition relative to a larger set of alternative entries.
B. Background
The word iris dates from classical times (ἶρις, a rainbow). As applied to the colored portion of the exterior eye, iris seems to date to the sixteenth century and was taken to denote this structure’s variegated appearance [50]. More technically, the iris is part of the uveal, or middle, coat of the eye. It is a thin diaphragm stretching across the anterior portion of the eye and supported by the lens (see Fig. 2). This support gives it the shape of a truncated cone in three dimensions. At its base, the iris is attached to the eye’s ciliary body. At the opposite end, it opens into the pupil, typically slightly to the nasal side and below center. The cornea lies in front of the iris and provides a transparent protective covering.
To appreciate the richness of the iris as a pattern for recognition, it is useful to consider its structure in a bit more detail. The iris is composed of several layers. Its posterior surface consists of heavily pigmented epithelial cells that make it light tight (i.e., impenetrable by light). Anterior to this layer are two cooperative muscles for controlling the pupil. Next is the stromal layer, consisting of collagenous connective tissue in arch-like processes. Coursing through this layer are radially arranged corkscrewlike blood vessels. The most anterior layer is the anterior border layer, differing from the stroma in being more densely packed, especially with individual pigment cells called chromatophores. The visual appearance of the iris is a direct result of its multilayered structure.

Fig. 1. The distinctiveness of the human iris. The two panels show images of the left iris of two individuals. Even to casual inspection, the imaged patterns in the two irises are markedly different.

The anterior surface of the iris is seen to be divided into a central pupillary zone and a surrounding ciliary zone. The border of these two areas is termed the collarette; it appears as a zigzag circumferential ridge resulting as the anterior border layer ends abruptly near the pupil. The ciliary zone contains many interlacing ridges resulting from stromal support. Contractile lines here can vary with the state of the pupil. Additional meridional striations result from the radiating vasculature. Other assorted variations in appearance owe to crypts (irregular atrophy of the border layer), nevi (small elevations of the border layer), and
freckles (local collections of chromatophores). In contrast, the pupillary zone can be relatively flat. However, it often shows radiating spoke-like processes and a pigment frill where the posterior layer’s heavily pigmented tissue shows at the pupil boundary. Last, iris color results from the differential absorption of light impinging on the pigmented cells in the anterior border layer. When there is little pigmentation in the anterior border layer, light reflects back from the posterior epithelium and is scattered as it passes through the stroma to yield a blue appearance. Progressive levels of anterior pigmentation lead to darker colored irises. Additional details of iris structure can be found in the biomedical literature (e.g., [1], [16]).
Claims that the structure of the iris is unique to an individual and is stable with age come from two main sources. The first source of evidence is clinical observations. During the course of examining large numbers of eyes, ophthalmologists [20] and anatomists [1] have noted that the detailed pattern of an iris, even the left and right iris of a single person, seems to be highly distinctive. Further, in cases with repeated observations, the patterns seem to vary little, at least past childhood. The second source of evidence is developmental biology [35], [38]. There, one finds that while the general structure of the iris is genetically determined, the particulars of its minutiae are critically dependent on circumstances (e.g., the initial conditions in the embryonic precursor to the iris). Therefore, they are highly unlikely to be replicated via the natural course of events. Rarely, the developmental process goes awry, yielding only a rudimentary iris (aniridia) or a marked displacement (corectopia) or shape distortion (coloboma) of the pupil [35], [42]. Developmental evidence also bears on issues of stability with age. Certain parts of the iris (e.g., the vasculature) are largely in place at birth, whereas others (e.g., the musculature) mature around two years of age [1], [35]. Of particular significance for the purposes of recognition is the fact that pigmentation patterning continues until adolescence [1], [43], [51]. Also, the average pupil size (for an individual) increases slightly until adolescence [1]. Following adolescence, the healthy iris varies little for the rest of a person’s life, although slight depigmentation and shrinking of the average pupillary opening are standard with advanced age [1], [42]. Various diseases of the eye can drastically alter the appearance of the iris [41], [42]. It also appears that intensive exposure to certain environmental contaminants (e.g., metals) can alter iris pigmentation [41], [42]. However, these conditions are rare. Claims that the iris changes with more general states of health (iridology) have been discredited [4], [56]. On the whole, these lines of evidence suggest that the iris is highly distinctive and, following childhood, typically stable. Nevertheless, it is important to note that large-scale studies that specifically address the distinctiveness and stability of the iris, especially as a biometric, have yet to be performed.
Fig. 2. Anatomy of the human iris. (a) The structure of the iris seen in a transverse section. (b) The structure of the iris seen in a frontal sector. The visual appearance of the human iris derives from its anatomical structure.

Another interesting aspect of the iris from a biometric point of view has to do with its moment-to-moment dynamics. Due to the complex interplay of the iris’ muscles, the diameter of the pupil is in a constant state of small oscillation [1], [16]. Potentially, this movement could be monitored to make sure that a live specimen is being evaluated. Further, since the iris reacts very quickly to changes in impinging illumination (e.g., on the order of hundreds of milliseconds for contraction), monitoring the reaction to a controlled illuminant could provide similar evidence. In contrast, upon morbidity, the iris contracts and hardens, facts that may have ramifications for its use in forensics.
Apparently, the first use of iris recognition as a basis for personal identification goes back to efforts to distinguish inmates in the Parisian penal system by visually inspecting their irises, especially the patterning of color [5]. More recently, the concept of automated iris recognition was
proposed by Flom and Safir [20]. It does not appear, however, that this team ever developed and tested a working system. Early work toward actually realizing a system for automated iris recognition was carried out at Los Alamos National Laboratories, NM [32]. Subsequently, two research groups developed and documented prototype iris-recognition systems [14], [52]. These systems have shown promising performance on diverse data bases of hundreds of iris images. Other research into automated iris recognition has been carried out in North America [48] and Europe [37]; however, these efforts have not been well documented to date. More anecdotally, a notion akin to automated iris recognition came to popular attention in the James Bond film Never Say Never Again, in which characters are depicted having images of their eye captured for the purpose of identification [22].
Fig. 3. Schematic diagram of iris recognition. Given a subject to be evaluated (left of upper row) relative to a data base of iris records (left of lower row), recognition proceeds in three steps. The first step is image acquisition, which yields an image of the subject’s eye region. The second step is iris localization, which delimits the iris from the rest of the acquired image. The third step is pattern matching, which produces a decision, “D.” For verification, the decision is a yes/no response relative to a particular prespecified data base entry; for identification, the decision is a record (possibly null) that has been indexed relative to a larger set of entries.
C. Outline
This paper subdivides into four major sections. This first section has served to introduce the notion of automated iris recognition. Section II describes the major technical issues that must be confronted in the design of an iris-recognition system. Illustrative solutions are provided by reference to the two systems that have been well documented in the open literature [14], [52]. Section III overviews the status of these systems, including test results. Last, Section IV provides concluding observations.
II. TECHNICAL ISSUES
Conceptually, issues in the design and implementation of a system for automated iris recognition can be subdivided into three parts (see Fig. 3). The first set of issues surrounds image acquisition. The second set is concerned with localizing the iris per se from a captured image. The third part is concerned with matching an extracted iris pattern with candidate data base entries. This section of the paper discusses these issues in some detail. Throughout the discussion, the iris-recognition systems of Daugman [12]–[14] and Wildes et al. [52]–[54] will be used to provide illustrations.
A. Image Acquisition
One of the major challenges of automated iris recognition is to capture a high-quality image of the iris while remaining noninvasive to the human operator. Given that the iris is a relatively small (typically about 1 cm in diameter), dark object and that human operators are very sensitive about their eyes, this matter requires careful engineering. Several points are of particular concern. First, it is desirable to acquire images of the iris with sufficient resolution and sharpness to support recognition. Second, it is important to have good contrast in the interior iris pattern without resorting to a level of illumination that annoys the operator, i.e., adequate intensity of source (W/cm²) constrained by operator comfort with brightness (W/sr·cm²). Third, these images must be well framed (i.e., centered) without unduly constraining the operator (i.e., preferably without requiring the operator to employ an eye piece, chin rest, or other contact positioning that would be invasive). Further, as an integral part of this process, artifacts in the acquired images (e.g., due to specular reflections, optical aberrations, etc.) should be eliminated as much as possible. Schematic diagrams of two image-acquisition rigs that have been developed in response to these challenges are shown in Fig. 4.
Extant iris-recognition systems have been able to answer the challenges of image resolution and focus using standard optics. The Daugman system captures images with the iris diameter typically between 100 and 200 pixels from a distance of 15–46 cm using a 330-mm lens. Similarly, the Wildes et al. system images the iris with approximately 256 pixels across the diameter from 20 cm using an 80-mm lens. Due to the need to keep the illumination level relatively low for operator comfort, the optical aperture cannot be too small (e.g., an f-stop of about 11). Therefore, both systems have fairly small depths of field, approximately 1 cm. Video rate capture is exploited by both systems. Typically, this is sufficient to guard against blur due to eye movements provided that the operator is attempting to maintain a steady gaze. Empirically, the overall spatial resolution and focus that results from these designs appear to be sufficient to support iris recognition.
Fig. 4. Image-acquisition rigs for automated iris recognition. (a) A schematic diagram of the Daugman image-acquisition rig. (b) A schematic diagram of the Wildes et al. image-acquisition rig.
Interestingly, additional investigations have shown that images of potential quality to support iris recognition can be acquired in rather different settings. For example, iris images can be acquired at distances up to a meter (using a standard video camera with a telephoto lens) [54]. Further, iris images can be acquired at very close range
while an operator wears a head-mounted display equipped with light emitting diode (LED) illuminants and microminiature optics and camera [47]. However, iris images acquired in these latter fashions have received only very preliminary testing with respect to their ability to support recognition.
Illumination of the iris must be concerned with the tradeoff between revealing the detail in a potentially low contrast pattern (i.e., due to dense pigmentation of dark irises) and the light sensitivity of human operators. The Daugman and Wildes et al. systems illustrate rather different approaches to this challenge. The former makes use of an LED-based point light source in conjunction with a standard video camera. The latter makes use of a diffuse source and polarization in conjunction with a low-light level camera. The former design results in a particularly simple and compact system. Further, by careful positioning of the light source below the operator, reflections of the point source off eyeglasses can be avoided in the imaged iris. Without placing undue restriction on the operator, however, it has not been possible to reliably position the specular reflection at the eye’s cornea outside the iris region. Therefore, this design requires that the region of the image where the point source is seen (the lower quadrant of the iris as the system has been instantiated) must be omitted during matching since it is dominated by artifact. The latter design results in an illumination rig that is more complex; however, certain advantages result. First, the use of matched circular polarizers at the light source and the camera essentially eliminates the specular reflection of the light source.2 This allows for more of the iris detail to be available for subsequent processing. Second, the coupling of a low light level camera (a silicon intensified camera [26]) with a diffuse illuminant allows for a level of illumination that is entirely unobjectionable to human operators. In terms of spectral distribution, both systems make use of light that is visible to human operators. It has been suggested, however, that infrared illumination would also suffice [14], [47]. Further, both systems essentially eschew color information in their use of monochrome cameras with 8-b gray-level resolution. Presumably, color information could provide additional discriminatory power. Also, color could be of use for initial coarse indexing through large iris data bases. For now, it is interesting to note that empirical studies to date suggest the adequacy of gray-level information alone (see, e.g., Section III).
The positioning of the iris for image capture is concerned with framing all of the iris in the camera’s field of view with good focus. Both the Daugman and Wildes et al. systems require the operator to self-position his eye region in front of the camera. Daugman’s system provides the operator with live video feedback via a miniature liquidcrystal display placed in line with the camera’s optics via a beam splitter. This allows the operator to see what the camera is capturing and to adjust his position accordingly.
2 Light emerging from the circular polarizer will have a particular sense of rotation. When this light strikes a specularly reflecting surface (e.g., the cornea), the light that is reflected back is still polarized but has reversed sense. This reversed-sense light is not passed through the camera’s filter and is thereby blocked from forming an image. In contrast, the diffusely reflecting parts of the eye (e.g., the iris) scatter the impinging light. This light is passed through the camera’s filter and is subsequently available for image formation [31]. Interestingly, a similar solution using crossed polarizers (e.g., vertical at the illuminant and horizontal at the camera) is not appropriate for this application: the birefringence of the eye’s cornea yields a low-frequency artifact in the acquired images [10].
During this process, the system is continually acquiring images. Once a series of images of sufficient quality is acquired, one is automatically forwarded for subsequent processing. Image quality is assessed by looking for high-contrast edges marking the boundary between the iris and the sclera.
In contrast, the Wildes et al. system provides a reticle to aid the operator in positioning. In particular, a square contour is centered around the camera lens so that it is visible to the operator. Suspended in front of this contour is a second, smaller contour of the same shape. The relative sizes and positions of these contours are chosen so that when the eye is in an appropriate position, the squares overlap and appear as one to the operator. As the operator maneuvers, the relative misalignment of the squares provides continuous feedback regarding the accuracy of the current position. Once the operator has completed the alignment, he activates the image capture by pressing a button.
Subjectively, both of the described approaches to positioning are fairly easy for a human operator to master. Since the potential for truly noninvasive assessment is one of the intriguing aspects of iris recognition, however, it is worth underlining the degree of operator participation that is required in these systems. While physical contact is avoided, the level of required cooperativity may still prevent the systems from widespread application. In fact, it appears that all extant approaches to automated iris recognition require operator assistance for this purpose (i.e., as additionally reported in [32], [37], and [48]). Therefore, an interesting direction for future research involves the development of a system that automatically frames an operator’s iris over a larger three-dimensional volume with minimal operator participation. For example, the ability to locate a face within a range of about a meter and then to point and zoom a camera to acquire an image of the eye region has been demonstrated using available computer vision technology [23]. While this work is quite preliminary, it suggests the possibility of acquiring iris images in scenarios that are more relaxed than those required by current iris-recognition systems. The ability to perform this task in an effective and efficient manner is likely to have great implications for the widespread deployment of iris recognition.
For graphical illustration, an image of an iris, including the surrounding eye region, is shown in Fig. 5. The quality of this image, acquired from the Wildes et al. system, could be expected from either of the systems under discussion.
B. Iris Localization
Without placing undue constraints on the human operator, image acquisition of the iris cannot be expected to yield an image containing only the iris. Rather, image acquisition will capture the iris as part of a larger image that also contains data derived from the immediately surrounding eye region. Therefore, prior to performing iris pattern matching, it is important to localize that portion of the acquired image that corresponds to an iris. In particular, it is necessary to localize that portion of the image derived from inside the limbus (the border between the sclera and the iris) and
outside the pupil. Further, if the eyelids are occluding part of the iris, then only that portion of the image below the upper eyelid and above the lower eyelid should be included. Typically, the limbic boundary is imaged with high contrast, owing to the sharp change in eye pigmentation that it marks. The upper and lower portions of this boundary, however, can be occluded by the eyelids. The pupillary boundary can be far less well defined. The image contrast between a heavily pigmented iris and its pupil can be quite small. Further, while the pupil typically is darker than the iris, the reverse relationship can hold in cases of cataract: the clouded lens leads to a significant amount of backscattered light. Like the pupillary boundary, eyelid contrast can be quite variable depending on the relative pigmentation in the skin and the iris. The eyelid boundary also can be irregular due to the presence of eyelashes. Taken in tandem, these observations suggest that iris localization must be sensitive to a wide range of edge contrasts, robust to irregular borders, and capable of dealing with variable occlusion.

Fig. 5. Example of captured iris image. Imaging of the iris must acquire sufficient detail for recognition while being minimally invasive to the operator. Image acquisition yields an image of the iris as well as the surrounding eye region.
Reference to how the Daugman and Wildes et al. iris-recognition systems perform iris localization further illustrates the issues. Both of these systems make use of first derivatives of image intensity to signal the location of edges that correspond to the borders of the iris. Here, the notion is that the magnitude of the derivative across an imaged border will show a local maximum due to the local change of image intensity. Also, both systems model the various boundaries that delimit the iris with simple geometric models. For example, they both model the limbus and pupil with circular contours. The Wildes et al. system also explicitly models the upper and lower eyelids with parabolic arcs, whereas the Daugman system simply excludes the upper- and lower-most portions of the image, where eyelid occlusion is expected to occur. In both systems, the expected configuration of model components is used to fine tune the image intensity derivative information. In particular, for the limbic boundary, the derivatives are filtered to be selective for vertical edges. This directional selectivity is motivated by the fact that even in the face of occluding eyelids, the left and right portions of the limbus should be visible and oriented near the vertical (assuming that the head is in an upright position). Similarly, the derivatives are filtered to be selective for horizontal information when locating the eyelid borders. In contrast, since the entire (roughly circular) pupillary boundary is expected to be present in the image, the derivative information is used in a more isotropic fashion for localization of this structure. In practice, this fine tuning of the image information has proven to be critical for accurate localization. For example, without such tuning, the fits can be driven astray by competing image structures (e.g., eyelids interfering with limbic localization, etc.).

The two systems differ mostly in the way that they search their parameter spaces to fit the contour models to the image information. To understand how these searches proceed, let $I(x, y)$ represent the image intensity value at location $(x, y)$ and let circular contours (for the limbic and pupillary boundaries) be parameterized by center location $(x_c, y_c)$ and radius $r$. The Daugman system fits the circular contours via gradient ascent on the parameters $(x_c, y_c, r)$ so as to maximize

$$\left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{(x_c, y_c, r)} \frac{I(x, y)}{2\pi r}\, ds \right|$$

where $G_\sigma(r) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(r - r_0)^2 / 2\sigma^2}$ is a radial Gaussian with center $r_0$ and standard deviation $\sigma$ that smooths the image to select the spatial scale of edges under consideration, $*$ symbolizes convolution, $ds$ is an element of circular arc, and division by $2\pi r$ serves to normalize the integral. In order to incorporate directional tuning of the image derivative, the arc of integration is restricted to the left and right quadrants (i.e., near vertical edges) when fitting the limbic boundary. This arc is considered over a fuller range when fitting the pupillary boundary; however, the lower quadrant of the image is still omitted due to the artifact of the specular reflection of the illuminant in that region (see Section II-A). In implementation, the contour fitting procedure is discretized, with finite differences serving for derivatives and summation used to instantiate integrals and convolutions. More generally, fitting contours to images via this type of optimization formulation is a standard machine vision technique, often referred to as active contour modeling (see, e.g., [33] and [57]).
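For concreteness, the following sketch illustrates the discretized form of this fit. It is not Daugman's implementation: a coarse grid search over candidate parameters stands in for gradient ascent, the image is assumed to be a grayscale NumPy array, and all function names and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circular_mean_intensity(image, xc, yc, r, n_samples=64):
    """Mean intensity along a circle of radius r centered at (xc, yc).

    Only the left and right quadrants are sampled, mirroring the
    restriction to near-vertical edges used for the limbic boundary.
    """
    thetas = np.concatenate([
        np.linspace(-np.pi / 4, np.pi / 4, n_samples // 2),        # right
        np.linspace(3 * np.pi / 4, 5 * np.pi / 4, n_samples // 2)  # left
    ])
    xs = np.clip(np.round(xc + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(yc + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
    return float(image[ys, xs].mean())

def integro_differential_fit(image, center_candidates, radii, sigma=2.0):
    """Coarse stand-in for gradient ascent on (xc, yc, r).

    For each candidate center, the circular mean intensity is computed as
    a function of radius, smoothed with a radial Gaussian, and differenced;
    the parameter triple maximizing |d/dr| is returned.
    """
    best, best_score = None, -np.inf
    for xc, yc in center_candidates:
        profile = np.array([circular_mean_intensity(image, xc, yc, r) for r in radii])
        smoothed = gaussian_filter1d(profile, sigma)
        deriv = np.abs(np.gradient(smoothed, radii))
        i = int(np.argmax(deriv))
        if deriv[i] > best_score:
            best_score, best = deriv[i], (xc, yc, radii[i])
    return best

# Hypothetical usage: search a small neighborhood of centers and radii.
# centers = [(x, y) for x in range(120, 140, 2) for y in range(120, 140, 2)]
# xc, yc, r = integro_differential_fit(img, centers, np.arange(40, 120, dtype=float))
```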

The Wildes et al. system performs its contour fitting in two steps. First, the image intensity information is converted into a binary edge-map. Second, the edge points vote to instantiate particular contour parameter values. The edge-map is recovered via gradient-based edge detection [2], [44]. This operation consists of thresholding the magnitude of the image intensity gradient, i.e., $|\nabla G(x, y) * I(x, y)|$, where $\nabla \equiv (\partial/\partial x, \partial/\partial y)$ while $G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}}$ is a two-dimensional Gaussian with center $(x_0, y_0)$ and standard deviation $\sigma$ that smooths the image to select the spatial scale of edges under consideration. In order to incorporate directional tuning, the image intensity derivatives are weighted to favor certain ranges of orientation prior to taking the magnitude. For example, prior to contributing to the fit of the limbic boundary contour, the derivatives are weighted to be selective for vertical edges. The voting procedure is realized via Hough transforms [27], [28] on parametric definitions of the iris boundary contours. In particular, for the circular limbic or pupillary boundaries and a set of recovered edge points $(x_j, y_j)$, $j = 1, \ldots, n$, a Hough transform is defined as

$$H(x_c, y_c, r) = \sum_{j=1}^{n} h(x_j, y_j, x_c, y_c, r)$$

where

$$h(x_j, y_j, x_c, y_c, r) = \begin{cases} 1, & \text{if } g(x_j, y_j, x_c, y_c, r) = 0 \\ 0, & \text{otherwise} \end{cases}$$

with

$$g(x_j, y_j, x_c, y_c, r) = (x_j - x_c)^2 + (y_j - y_c)^2 - r^2.$$

For each edge point $(x_j, y_j)$, $g(x_j, y_j, x_c, y_c, r) = 0$ for every parameter triple $(x_c, y_c, r)$ that represents a circle through that point. Correspondingly, the parameter triple that maximizes $H$ is common to the largest number of edge points and is a reasonable choice to represent the contour of interest. In implementation, the maximizing parameter set is computed by building $H(x_c, y_c, r)$ as an array that is indexed by discretized values for $x_c$, $y_c$, and $r$. Once populated, the array is scanned for the triple that defines its largest value. Contours for the upper and lower eyelids are fit in a similar fashion using parameterized parabolic arcs in place of the circle parameterization $g(x_j, y_j, x_c, y_c, r)$. Just as the Daugman system relies on standard techniques for iris localization, edge detection followed by a Hough transform is a standard machine vision technique for fitting simple contour models to images [2], [44].
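Again for concreteness, a minimal sketch of this two-step procedure follows, assuming NumPy/SciPy and a grayscale image array; the orientation weighting, threshold, and sampling density are illustrative choices, not those of the Wildes et al. system.

```python
import numpy as np
from scipy import ndimage

def oriented_edge_map(image, sigma=2.0, threshold=20.0, vertical_bias=True):
    """Binary edge-map via a thresholded, orientation-weighted gradient.

    The image is smoothed with a two-dimensional Gaussian before the
    partial derivatives are taken; weighting the horizontal derivative
    emphasizes near-vertical edges, as when seeding the limbic fit.
    """
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    if vertical_bias:
        magnitude = np.hypot(2.0 * gx, 0.5 * gy)  # illustrative weights
    else:
        magnitude = np.hypot(gx, gy)
    return magnitude > threshold

def hough_circles(edge_map, radii):
    """Vote H(xc, yc, r) over all edge points and return the winning circle."""
    h, w = edge_map.shape
    accumulator = np.zeros((h, w, len(radii)), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for k, r in enumerate(radii):
        for theta in thetas:
            # Each edge point votes for every center consistent with radius r.
            xc = np.round(xs - r * np.cos(theta)).astype(int)
            yc = np.round(ys - r * np.sin(theta)).astype(int)
            valid = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            np.add.at(accumulator[:, :, k], (yc[valid], xc[valid]), 1)
    yc, xc, k = np.unravel_index(int(accumulator.argmax()), accumulator.shape)
    return xc, yc, radii[k]

# Hypothetical usage: localize the limbic boundary in image `img`.
# edges = oriented_edge_map(img)
# xc, yc, r = hough_circles(edges, radii=range(80, 140))
```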

Both approaches to localizing the iris have proven to be successful in the targeted application. The histogram-based approach to model fitting should avoid problems with local minima that the active contour model’s gradient descent procedure might experience. By operating more directly with the image derivatives, however, the active contour approach avoids the inevitable thresholding involved in generating a binary edge-map. Further, explicit modeling of the eyelids (as done in the Wildes et al. system) should allow for better use of available information than simply omitting the top and bottom of the image. However, this added precision comes with additional computational expense. More generally, both approaches are likely to encounter difficulties if required to deal with images that contain broader regions of the surrounding face than the immediate eye region. For example, such images are likely to result from image-acquisition rigs that require less operator participation than those currently in place. Here, the additional image “clutter” is likely to drive the current, relatively simple model fitters to poor results. Solutions to this type of situation most likely will entail a preliminary coarse eye localization procedure to seed iris localization proper. In any case, following successful iris localization, the portion of the captured image that corresponds to the iris can be delimited. Fig. 6 provides an example result of iris localization as performed by the Wildes et al. system.

Fig. 6. Illustrative results of iris localization. Given an acquired image, it is necessary to separate the iris from the surround. The input to the localization process was the captured iris image of Fig. 5. Following iris localization, all but the iris per se is masked out.
C. Pattern Matching
Having localized the region of an acquired image that corresponds to the iris, the final task is to decide if this pattern matches a previously stored iris pattern. This matter of pattern matching can be decomposed into four parts (a minimal code sketch of this decomposition follows the list):
1) bringing the newly acquired iris pattern into spatial alignment with a candidate data base entry;
2) choosing a representation of the aligned iris patterns that makes their distinctive patterns apparent;
3) evaluating the goodness of match between the newly acquired and data base representations;
4) deciding if the newly acquired data and the data base entry were derived from the same iris based on the goodness of match.
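The following skeleton, assuming grayscale NumPy arrays that have already been localized and cropped, shows how the four parts compose; each helper is a deliberately simple stand-in (identity alignment, a single Laplacian of Gaussian band, normalized correlation, and a fixed threshold), not the machinery of either system.

```python
import numpy as np
from scipy import ndimage

def align(acquired, reference):
    """Step 1 stand-in: assume localization already brought the images
    into rough spatial correspondence."""
    return acquired

def represent(image, sigma=2.0):
    """Step 2 stand-in: a single isotropic bandpass (LoG) band."""
    return ndimage.gaussian_laplace(image.astype(float), sigma)

def goodness_of_match(a, b):
    """Step 3 stand-in: normalized correlation of the two representations."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def same_iris(acquired, db_entry, threshold=0.9):
    """Step 4: threshold the goodness of match to reach a decision."""
    aligned = align(acquired, db_entry)
    return goodness_of_match(represent(aligned), represent(db_entry)) >= threshold
```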
1) Alignment: To make a detailed comparison between two images, it is advantageous to establish a precise correspondence between characteristic structures across the pair. Both of the systems under discussion compensate for image shift, scaling, and rotation. Given the systems’ ability to aid operators in accurate self-positioning, these have proven to be the key degrees of freedom that required compensation. Shift accounts for offsets of the eye in the plane parallel to the camera’s sensor array. Scale accounts for offsets along the camera’s optical axis. Rotation accounts for deviation in angular position about the optical axis. Nominally, pupil dilation is not a critical issue for the current systems since their constant controlled illumination should bring the pupil of an individual to the same size across trials (barring illness, etc.). For both systems, iris localization is charged with isolating an iris in a larger acquired image and thereby essentially accomplishes alignment for image shift.
Daugman’s system uses radial scaling to compensate for overall size as well as a simple model of pupil variation based on linear stretching. This scaling serves to map Cartesian image coordinates $(x, y)$ to dimensionless polar image coordinates $(r, \theta)$ according to

$$x(r, \theta) = (1 - r)\, x_p(\theta) + r\, x_l(\theta)$$
$$y(r, \theta) = (1 - r)\, y_p(\theta) + r\, y_l(\theta)$$

where $r$ lies on $[0, 1]$ and $\theta$ is cyclic over $[0, 2\pi]$, while $(x_p(\theta), y_p(\theta))$ and $(x_l(\theta), y_l(\theta))$ are the coordinates of the pupillary and limbic boundaries in the direction $\theta$. Rotation is compensated for by explicitly shifting an iris representation in $\theta$ by various amounts during matching.
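A minimal sketch of this mapping follows, assuming the pupillary and limbic boundaries have been fit as circles (xc, yc, radius); the sampling densities and nearest-neighbor lookup are illustrative simplifications.

```python
import numpy as np

def to_polar(image, pupil, limbus, n_radial=64, n_angular=256):
    """Map the iris annulus to dimensionless polar coordinates (r, theta).

    For each direction theta, the sample at radial position r in [0, 1]
    interpolates linearly between the pupillary and limbic boundary
    points, so the representation is nominally invariant to pupil size.
    """
    xp, yp, rp = pupil
    xl, yl, rl = limbus
    out = np.zeros((n_radial, n_angular), dtype=float)
    for j, t in enumerate(np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)):
        # Boundary points in direction theta.
        x_pup, y_pup = xp + rp * np.cos(t), yp + rp * np.sin(t)
        x_lim, y_lim = xl + rl * np.cos(t), yl + rl * np.sin(t)
        for i, r in enumerate(np.linspace(0.0, 1.0, n_radial)):
            x = (1.0 - r) * x_pup + r * x_lim
            y = (1.0 - r) * y_pup + r * y_lim
            yy = min(max(int(round(y)), 0), image.shape[0] - 1)
            xx = min(max(int(round(x)), 0), image.shape[1] - 1)
            out[i, j] = image[yy, xx]
    return out
```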

The Wildes et al. system uses an image-registration technique to compensate for both scaling and rotation. This approach geometrically warps a newly acquired image $I_a(x, y)$ into alignment with a selected data base image $I_d(x, y)$ according to a mapping function $(u(x, y), v(x, y))$ such that for all $(x, y)$, the image intensity value at $(x, y) - (u(x, y), v(x, y))$ in $I_a$ is close to that at $(x, y)$ in $I_d$. More precisely, the mapping function is taken to minimize

$$\int_x \int_y \left( I_d(x, y) - I_a(x - u, y - v) \right)^2 \, dx\, dy$$

while being constrained to capture a similarity transformation of image coordinates $(x, y)$ to $(x', y')$, i.e.,

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} - s R(\phi) \begin{pmatrix} x \\ y \end{pmatrix}$$

with $s$ a scaling factor and $R(\phi)$ a matrix representing rotation by $\phi$. In implementation, given a pair of iris images $I_a$ and $I_d$, the warping parameters $s$ and $\phi$ are recovered via an iterative minimization procedure [3].
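To make the constrained minimization concrete, here is a brute-force stand-in for the iterative procedure of [3]: it searches a small grid of scales s and rotations phi and keeps the similarity transform that minimizes the summed squared intensity difference. The grid ranges, helper names, and interpolation order are all illustrative.

```python
import numpy as np
from scipy import ndimage

def _match_shape(image, shape):
    """Center-crop or zero-pad an image to the given shape."""
    out = np.zeros(shape, dtype=image.dtype)
    h, w = min(shape[0], image.shape[0]), min(shape[1], image.shape[1])
    oy, ox = (shape[0] - h) // 2, (shape[1] - w) // 2
    iy, ix = (image.shape[0] - h) // 2, (image.shape[1] - w) // 2
    out[oy:oy + h, ox:ox + w] = image[iy:iy + h, ix:ix + w]
    return out

def register_similarity(acquired, reference,
                        scales=np.linspace(0.9, 1.1, 11),
                        angles_deg=np.linspace(-10.0, 10.0, 21)):
    """Grid-search stand-in for recovering the warp parameters (s, phi)."""
    acquired = acquired.astype(float)
    reference = reference.astype(float)
    best, best_err = None, np.inf
    for s in scales:
        for phi in angles_deg:
            # Rotate about the image center, then scale, mimicking s R(phi).
            warped = ndimage.rotate(acquired, phi, reshape=False, order=1)
            warped = ndimage.zoom(warped, s, order=1)
            warped = _match_shape(warped, reference.shape)
            err = float(((reference - warped) ** 2).sum())
            if err < best_err:
                best_err, best = err, (float(s), float(phi))
    return best
```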

As with much of the processing that the two iris-recognition systems perform, the methods for establishing correspondences between acquired and data base iris images seem to be adequate for controlled assessment scenarios. Once again, however, more sophisticated methods may prove to be necessary in more relaxed scenarios. For example, a simple linear stretching model of pupil dilation does not capture the complex physical nature of this process, e.g., the coiling of blood vessels and the arching of stromal fibers. Similarly, more complicated global geometric compensations will be necessary if full perspective distortions (e.g., foreshortening) become significant.

2) Representation: The distinctive spatial characteristics of the human iris are manifest at a variety of scales. For example, distinguishing structures range from the overall shape of the iris to the distribution of tiny crypts and detailed texture. To capture this range of spatial detail, it is advantageous to make use of a multiscale representation. Both of the iris-recognition systems under discussion make use of bandpass image decompositions to avail themselves of multiscale information. The Daugman system makes use of a decomposition derived from application of a two-dimensional version of Gabor filters [21] to the image data. Since the Daugman system converts to polar coordinates during alignment, it is convenient to give the filters in a corresponding form as

$$H(r, \theta) = e^{-i\omega(\theta - \theta_0)}\, e^{-(r - r_0)^2/\alpha^2}\, e^{-(\theta - \theta_0)^2/\beta^2}$$

where $\alpha$ and $\beta$ covary in inverse proportion to $\omega$ to generate a set of quadrature pair frequency-selective filters with center locations specified by $(r_0, \theta_0)$. These filters are particularly notable for their ability to achieve good joint localization in the spatial and frequency domains. Further, owing to their quadrature nature, these filters can capture information about local phase. Following the Gabor decomposition, Daugman’s system compresses its representation by quantizing the local phase angle according to whether the real, $\operatorname{Re}$, and imaginary, $\operatorname{Im}$, filter outputs are positive or negative. For a filter given with bandpass parameters $\alpha$, $\beta$, and $\omega$ and location $(r_0, \theta_0)$, a pair of bits $(h_{\text{Re}}, h_{\text{Im}})$ is generated according to

$$h_{\text{Re}} = \begin{cases} 1 & \text{if } \operatorname{Re} \displaystyle\int_\rho \int_\phi e^{-i\omega(\theta_0 - \phi)}\, e^{-(r_0 - \rho)^2/\alpha^2}\, e^{-(\theta_0 - \phi)^2/\beta^2}\, I(\rho, \phi)\, \rho\, d\rho\, d\phi \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

with $h_{\text{Im}}$ defined analogously in terms of the imaginary part of the same integral. The parameters $\alpha$, $\beta$, $\omega$, and $(r_0, \theta_0)$ are sampled so as to yield a 256-byte representation that serves as the basis for subsequent processing. In implementation, the Gabor filtering is performed via a relaxation algorithm [11], with quantization of the recovered phase information yielding the final representation.
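The quantization step admits a compact sketch: given complex (quadrature) filter responses sampled over locations and bandpass parameters, each response contributes the sign bits of its real and imaginary parts. The fractional Hamming distance shown here is a standard way to compare such binary codes; the sampling that produces `responses` is assumed to have been done elsewhere, and rotation compensation would repeat the comparison over cyclic shifts of one code.

```python
import numpy as np

def quantize_phase(responses):
    """Two bits per complex filter response: signs of Re and Im parts."""
    bits_re = (responses.real >= 0.0)
    bits_im = (responses.imag >= 0.0)
    return np.stack([bits_re, bits_im], axis=-1).ravel()

def hamming_fraction(code_a, code_b):
    """Fraction of disagreeing bits between two binary codes."""
    return float(np.mean(code_a != code_b))

# Hypothetical usage: 1024 complex responses yield a 2048-bit (256-byte) code.
# code = quantize_phase(responses)           # responses: complex np.ndarray
# d = hamming_fraction(code, stored_code)    # small d suggests the same iris
```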

The Wildes et al. system makes use of an isotropic bandpass decomposition derived from application of Laplacian of Gaussian filters [25], [29] to the image data. These filters can be specified as

$$-\frac{1}{\pi\sigma^4} \left( 1 - \frac{\rho^2}{2\sigma^2} \right) e^{-\rho^2 / 2\sigma^2}$$

with $\sigma$ the standard deviation of the Gaussian and $\rho$ the radial distance of a point from the filter’s center. In practice, the filtered image is realized as a Laplacian pyramid [8], [29]. This representation is defined procedurally in terms of a cascade of small Gaussian-like filters. In particular, let $w = \frac{1}{16}[1\ 4\ 6\ 4\ 1]$ be a one-dimensional mask and $W = w^{\top} w$ be the two-dimensional mask that results from taking the outer product of $w$ with itself.
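The cascade construction admits a short sketch. Assuming the standard generating kernel from [8] and SciPy convolution, each pyramid level is the difference between the current image and an expanded version of its reduced copy; boundary handling and level count are illustrative choices.

```python
import numpy as np
from scipy import ndimage

w = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # one-dimensional mask
W = np.outer(w, w)                               # two-dimensional mask

def reduce_once(image):
    """Blur with W, then subsample by a factor of two."""
    return ndimage.convolve(image, W, mode='reflect')[::2, ::2]

def expand_once(image, shape):
    """Upsample by two (zero interleave), blur, and rescale."""
    up = np.zeros(shape, dtype=float)
    up[::2, ::2] = image
    return 4.0 * ndimage.convolve(up, W, mode='reflect')

def laplacian_pyramid(image, levels=4):
    """Bandpass bands as differences of successive Gaussian-like levels."""
    pyramid, current = [], image.astype(float)
    for _ in range(levels - 1):
        reduced = reduce_once(current)
        pyramid.append(current - expand_once(reduced, current.shape))
        current = reduced
    pyramid.append(current)  # lowpass residual
    return pyramid
```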
