Chapter 12
Bank Examination and Enforcement
Introduction
The 1980s and early 1990s were undoubtedly a period of greater stress and turmoil for U.S. financial institutions than any other since the Great Depression. Over this period more than 1,600 commercial and savings banks insured by the FDIC were closed or received FDIC financial assistance. As a consequence, the bank regulatory system came under intense scrutiny, and fundamental questions were raised about its effectiveness in anticipating and limiting the number of bank failures and losses to the deposit insurance fund.
Effective supervision can be achieved in two ways: (1) problems can be recognized early, so that corrective measures can be taken and the bank returned to a healthy condition; (2) supervision can limit losses by closely monitoring troubled institutions, limiting their incentives to take excessive risks, and ensuring their prompt closure when they become insolvent or when their capital falls below some critical level.
This chapter reviews and analyzes the bank supervisory system during the 1980s and early 1990s by focusing principally upon bank examination and enforcement policies. The first part surveys the federal agencies’ bank examination policies during the 1980s and early 1990s and discusses how changes in bank supervisory philosophy affected examination staffing and frequency, and what the implications of these policies were for losses to the deposit insurance fund. The second part presents a retrospective on the effectiveness of bank supervisory tools used during this period, focusing on the ability to identify troubled banks and the ability to limit risk taking in these institutions by applying enforcement actions. The final part of the chapter discusses the implications for the bank supervisory process of the Prompt Corrective Action (PCA) provisions of the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA). An appendix describes the bank examination process, including the bank rating system and the nature and types of regulatory enforcement actions.

An Examination of the Banking Crises of the 1980s and Early 1990s

Volume I

Bank Supervisory Policies, 1980–1994
Given the constraints imposed on banking activities by the chartering authorities and by legislation and regulation, the primary tools the banking agencies use to ensure the health and stability of the financial system and the solvency of the bank and thrift insurance funds are bank examinations and enforcement actions. Currently there are four basic types of bank examinations. The first focuses on the bank’s trust department, to determine whether it is being operated in accordance with established regulations and standards. The second investigates whether the bank is in compliance with various measures designed to protect consumers, such as truth-in-lending requirements, civil rights laws, and community reinvestment regulations. A third type of bank examination focuses on the integrity of the bank’s electronic data processing (EDP) systems. Finally and most important, safety-and-soundness examinations focus on five key areas affecting the health of the institution: capital adequacy, asset quality, management, earnings, and liquidity (CAMEL).1 A bank is rated from 1 to 5 in each area, or component (1 representing the highest rating, 5 the lowest rating). After the overall condition of the bank is evaluated, a composite safety-and-soundness rating, known as a CAMEL rating, is also assigned. A composite CAMEL rating of 1 is given to banks performing well above average. A rating of 2 is given to banks operating adequately within safety-and-soundness standards. A CAMEL rating of 3 indicates below-average performance and some supervisory concerns. Performance well below average yields a CAMEL rating of 4, indicating that serious problems exist at the bank and need to be corrected. Finally, a CAMEL rating of 5 indicates severely deficient performance and the high probability of failure within 12 months. (The appendix includes a detailed description of the CAMEL rating system.)
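The component-and-composite scheme described above can be sketched as a small data structure. This is only an illustrative model of what the rating scale means, not the FDIC's actual procedure: composite ratings are assigned by examiner judgment, not computed mechanically from the components.

```python
# Illustrative sketch of the pre-1997 CAMEL rating scale (five components).
# The composite descriptions paraphrase the scale discussed in the text;
# in practice composites are assigned judgmentally by examiners.

CAMEL_COMPONENTS = ("Capital adequacy", "Asset quality", "Management",
                    "Earnings", "Liquidity")

COMPOSITE_MEANING = {
    1: "performing well above average",
    2: "operating adequately within safety-and-soundness standards",
    3: "below-average performance; some supervisory concerns",
    4: "well below average; serious problems needing correction",
    5: "severely deficient; high probability of failure within 12 months",
}

def is_problem_bank(composite: int) -> bool:
    """Problem banks were defined as those with composite ratings of 4 or 5."""
    if composite not in COMPOSITE_MEANING:
        raise ValueError(f"composite rating must be 1-5, got {composite}")
    return composite >= 4
```

The `is_problem_bank` cutoff (composite 4 or 5) matches the definition of "problem banks" used in the figures and tables later in this chapter.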
A serious deficiency in any of the areas covered by trust, EDP, and safety-and-soundness exams could lead to failure, but only safety-and-soundness examinations, because of their broad coverage, are discussed here.
Through the early 1970s, all banks—regardless of size and condition—received an examination approximately every 12 months.2 But in the middle to late 1970s, bank supervision policy changed significantly, and the change remained in place through the first half of the 1980s. The banking agencies began placing relatively more weight on off-site surveillance and relatively less on on-site examinations.3 This shift occurred partly because the Call Report data furnished by banks were increasingly comprehensive and partly because sophisticated computer models had been developed for analyzing these data; the increases in comprehensiveness and analytical ability allowed the agencies to make extensive use of off-site surveillance. They viewed off-site analysis as potentially reducing the need for on-site examination visits in nonproblem institutions; it would also reduce examination costs and the burden upon banks. These decisions had widespread implications for subsequent examiner staffing levels and examination frequency, both of which were reduced during the first half of the 1980s. By the latter half of the decade, however, off-site analysis had become relatively less important in the bank evaluation process vis-à-vis on-site examinations;4 and with passage of FDICIA, frequent on-site examinations again became required, this time as a matter of law.

1 As of January 1, 1997, the bank and thrift regulatory agencies added a sixth component to the safety-and-soundness examination, known as the “sensitivity-to-market-risk” component. After that date, therefore, the CAMEL rating system would be referred to as “CAMELS.” The new component evaluates how well institutions are prepared to protect bank earnings and capital from shifts in interest rates, in foreign exchange rates, and in commodity prices, and from fluctuations in portfolio values. In this chapter, the sixth component is not discussed.
2 The discussion of examination staffing and frequency is partly based on Lee Davison, “Bank Examination and Supervision” (unpublished paper), FDIC, February 1996.

History of the Eighties—Lessons for the Future
Other important changes in supervisory activity also occurred during the 1980s. Both the Office of the Comptroller of the Currency (OCC) and the FDIC sought to concentrate more examination resources on banks that posed greater systemic risk and relatively less on nonproblem institutions.5 All three agencies began cooperative examination programs during the early 1980s.6 Both the FDIC and the Federal Reserve System increasingly made use of state bank examinations for nonproblem institutions, often alternating examinations with state regulators in a move to increase efficiency. (See the appendix to this chapter for additional details.)
OCC Policies
The National Bank Act of 1864 mandated that the OCC examine all national banks twice a year but allowed an extension to three examinations every two years. This policy stood until 1974, when the Comptroller of the Currency commissioned a review of the agency’s operations from Haskins & Sells, a national accounting firm.7 The Haskins & Sells report had a major impact on the theory and practice of federal bank supervision. It criticized the OCC’s existing examination policy as inefficient and recommended that the
3 This shift in policy took place primarily at the Office of the Comptroller of the Currency and the FDIC. Although the Federal Reserve System enhanced its off-site surveillance capabilities as well, it did not significantly reduce its commitment to annual examinations for state member banks regardless of size.
4 FDIC, Annual Report (1990), 20.
5 The targeting of problem banks for more frequent examinations and enhanced supervision is documented in John O’Keefe and Drew Dahl, “The Scheduling and Reliability of Bank Examinations: The Effect of FDICIA” (unpublished paper), October 1996.
6 The cooperative examination programs primarily meant that the two federal banking agencies that had regulatory oversight of state banks (the FDIC and the Federal Reserve System) accepted state examinations in place of federal examinations if certain conditions were satisfied. In addition, all three federal banking agencies occasionally scheduled joint examinations, and they shared examination information with each other as needed.
7 The review was ordered primarily in response to the failure of the United States National Bank.

agency make greater use of statistical, computerized off-site analysis, focus examination resources on weak banks, and, in examinations, put more emphasis on evaluating bank management and systems of internal control and less on doing detailed audits of bank assets.8 These recommendations were gradually adopted beginning in 1976, when the OCC extended examination schedules to 18 months for banks with total assets of less than $300 million.9 At the same time, the OCC also established a risk-based examination structure by categorizing banks according to size: multinational, regional, and community.10
This risk-based structure was further refined under the “hierarchy of risk” policy in 1984. This new approach defined risk categories according to a bank’s size and perceived condition. Resident examiners were placed in the 11 largest multinational banks in 1986, and beginning early in the 1990s some larger regional banks also received resident examiners. In general, on-site resources moved toward the larger institutions and away from smaller banks that were perceived to have no problem. This development was accompanied by the increased use of continuous off-site analysis as well as by the use of targeted examinations (examinations that focused on a particular segment of a bank’s business) rather than full-scope examinations.11
FDIC Policies
Until 1976, the FDIC required that all institutions under its supervision receive a full-scope examination annually. Starting in 1976 and continuing through the early 1980s, the examination schedule was stretched out: only problem banks (those with CAMEL ratings of 4 or 5) were required to receive an annual full-scope examination; banks with lesser problems (CAMEL 3) were to be examined (full scope) at least every 18 months; and banks in satisfactory condition (CAMEL 1 or 2) were to receive either a full-scope or a modified (that is, somewhat less comprehensive) examination at least every 18 months.12 During the early 1980s, the FDIC also started to emphasize the expanded use of off-site monitoring as

8 See OCC, Haskins & Sells Study: 1974–75 (1975), A2–6. See also Jesse Stiller, OCC Bank Examination: A Historical Overview, OCC, 1995, and Eugene N. White, The Comptroller and the Transformation of American Banking, 1960–1990 (1992), 32–34.
9 White, Comptroller, 38.
10 Stiller, OCC Bank Examination, 27–28.
11 In 1982, Comptroller C. T. Conover noted that in 1980 the OCC put 70 percent of its effort into examining banks constituting only 20 percent of national bank assets and said the agency had to “examine smarter” by reducing the frequency of on-site examinations of small banks (changing the normal frequency for such banks from 18 months to 3 years) and by supplementing examinations with bank visitations (Linda W. McCormick, “Comptroller Begins Major Revamp,” American Banker [April 29, 1982], 15). The movement toward electronic off-site analysis was symbolized by the cake at the OCC’s 120th anniversary celebration in 1983: it was made in the shape of a computer (Andrew Albert, “Comptroller’s Office Throws a Bash,” American Banker [November 4, 1983], 16).
12 FDIC, Annual Report (1979), 4. For banks rated 1 or 2 in states where state examinations were accepted, the FDIC allowed alternating federal and state exams (FDIC, Annual Report [1980], 8–9).


well as the prioritization of examinations, which were to focus primarily upon problem institutions or those that posed the most risk to the deposit insurance fund. In 1983, the examination interval for nonproblem banks was extended to 36 months. By 1985, problem banks (CAMEL 4- and 5-rated) were to receive examinations every 12–18 months, CAMEL 3-rated banks every 12–24 months, and higher-rated institutions every 36 months, though for banks with less than $300 million in total assets this could be extended to five years.13
By 1986, facing a record number of problem banks, some of which had been highly rated, the FDIC revised its examination policies. The new policy called for all 1- and 2-rated banks to receive on-site examinations at least every 24 months, and all other banks to be examined by either the FDIC or state examiners at least every year. At year-end 1986, 1,814 commercial banks subject to FDIC supervision had not been examined in three years; by 1988 the number was reduced to 197, and by the following year, to 92.14 With the passage of FDICIA, the return to the examination policies of the 1970s was complete: the law mandated annual on-site examinations of all banks except highly rated small institutions, for which the interval could be extended to 18 months.
Federal Reserve Policies
The Federal Reserve System (FRS) also changed its examination policies in the early 1980s, placing more emphasis on remote surveillance and slightly stretching out examination schedules, but it varied the examination frequency much less than the other agencies did. In 1981, the FRS shifted from a policy of annual examinations for all state member banks to one that allowed the interval to extend to 18 months.15 This policy remained in place until 1985, when the previous annual requirement for state member banks was reinstated.16

13 FDIC, Annual Report (1983), xi; and Annual Report (1985), 14–15. The expanded intervals for on-site examinations were paired with the requirement that either bank visitations or off-site reviews be undertaken at least annually for 1- and 2-rated banks, every six months for 3-rated banks, and every three months for 4- and 5-rated banks. Visitations by bank regulators generally involve meetings with bank officials to discuss a variety of issues concerning the bank’s operations. Some examples of these issues are compliance with formal and informal corrective orders, progress in correcting deficiencies noted at the previous examination, and any other issues deemed relevant to the sound operations of the bank.
14 FDIC, Annual Report (1988), 2; and Annual Report (1989), 8.
15 FRB, Annual Report (1981), 180.
16 There were gradations to the Federal Reserve policy. Multinational state member banks and all banks with more than $10 billion in assets were to receive annual full-scope examinations as well as (in most cases) an additional targeted examination. Such examinations had to be conducted either independently by the Federal Reserve or jointly with state authorities. Gradations of smaller banks allowed progressively less Federal Reserve involvement with examinations, but in all cases annual examinations were still mandated. See “Fed Policy for Frequency and Scope of Examinations of State Member Banks and Inspections of Bank Holding Companies,” American Banker (October 10, 1985), 4–5; on follow-up meetings, see American Banker (October 11, 1985), 4.


Examination Staffing and Frequency
The agencies’ shift in supervisory philosophy in the early 1980s, placing more emphasis on off-site analysis and relatively less on on-site examination, had major implications for examination staffing and therefore for the ability to detect problem institutions at early stages. From 1979 through 1984 both the FDIC and the OCC reduced their examiner resources: the FDIC’s field examination staff declined 19 percent, from 1,713 to 1,389, and the OCC’s declined 20 percent, from 2,151 to 1,722. The Federal Reserve’s examination capacity remained almost unchanged, while state examiner levels declined from approximately 2,496 to 2,201. Over the same period, overall examiner resources at the federal and state levels declined by 14 percent, from 7,165 to 6,132 (see figure 12.1).17
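The percentage declines cited above follow from simple arithmetic on the staffing counts. As a quick check (the 1979 and 1984 figures are transcribed from the text):

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new (negative means a decline)."""
    return (new - old) / old * 100.0

# Field examination staff, 1979 vs. 1984, as reported in the text.
staff = {
    "FDIC":  (1_713, 1_389),   # about a 19 percent decline
    "OCC":   (2_151, 1_722),   # about a 20 percent decline
    "Total": (7_165, 6_132),   # about a 14 percent decline overall
}

for agency, (y1979, y1984) in staff.items():
    print(f"{agency}: {pct_change(y1979, y1984):.1f}%")
```

The computed values (-18.9, -19.9, and -14.4 percent) round to the 19, 20, and 14 percent figures quoted in the chapter.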
This substantial reduction in staff, especially at the federal level, came about primarily by means of a series of freezes on the hiring of new examiners at the FDIC and the OCC in the late 1970s and the early 1980s; these freezes were consistent with the policies of increased off-site surveillance and with the desire of first the Carter administration and then the Reagan administration to lessen the size of government.18 As a consequence of the freezes, staff shortages developed in subsequent years and continued until and even beyond the mid-1980s. By year-end 1985, for example, staffing levels at the FDIC were 25 percent below authorized levels. In addition to the hiring freezes, high turnover rates among examiners, due in part to the pay differential between the banking agencies and the private sector, also helped produce shortages in examiner staffs. Unfilled examiner vacancies persisted until the mid-1980s, when the agencies started to hire new examiners as the number of problem banks increased (rising from 217 to 1,140 between 1980 and 1985—more than a fivefold increase). Thus, during a period of rapidly growing instability in banking with an unprecedented number of problem banks, the agencies’ examination staffs consisted of large numbers of inexperienced personnel. As a consequence, experienced staff were forced to devote considerable effort to training new examiners and were correspondingly less available to conduct work on safety-and-soundness examinations.19 From 1986 to 1992, for example, approximately half of the supervisory staff at the FDIC consisted of assistant examiners with less than three years’ experience.

17 The reduction in examination staff and examination frequency over the period 1981–85 was not a function of a reduced number of banks or assets under supervision by the regulatory agencies. For the OCC, for example, the number of national banks increased from 4,468 to 4,959; total assets under supervision increased from $1.2 trillion to $1.6 trillion; and the assets per examiner for all national banks increased from $668 million to $910 million. (In Texas, the number of national banks increased from 694 to 1,058.) For the FDIC, the number of state nonmember banks did decline about 5 percent, going from 9,257 to 8,767, but the total assets under supervision increased from $589 billion to $805 billion, and the assets per examiner increased from $355 million to $520 million. (In Texas, the number of state nonmember banks actually increased slightly, going from 786 to 808.) For the Federal Reserve, the total number of state member banks increased from 1,020 to 1,070; the total assets under supervision increased from approximately $387 billion to $495 billion; and assets per examiner grew from $484 million to $593 million. (There were only a small number of state member banks in Texas.)
18 Under the directives of the Reagan administration in 1981, the OCC instituted a hiring freeze for all examiners. The FDIC, as an independent agency, was under no legal obligation to follow suit but chose to freeze its examination staff in 1981. In the late 1970s, the Carter administration had also attempted to limit the size of the federal work force.

Figure 12.1
Field Examination Staffs of the Federal and State Banking Agencies, and Total Number of Problem Banks, 1979–1994
[Line chart. Left axis: number of examiners, falling from 7,165 in 1979 to a low of 6,132 before rising to a peak of 9,614. Right axis: number of problem banks (CAMEL rating of 4 and 5).]
Sources: FDIC, FRB, OCC, and Conference of State Bank Supervisors.
* Because problem banks were not classified as those having 4 and 5 CAMEL ratings until 1980, the number of problem banks for 1979 is not included. Total number of examiners includes all federal and state bank regulators.
Furthermore, as problem banks multiplied in the Midwest and Southwest, resources were shifted from areas with seemingly healthy banks, such as the Northeast. Experienced FDIC examiners in the Northeast routinely spent a quarter of their time out of the region assisting with problems elsewhere. Moreover, as bank failures increased, bank examination personnel were detailed to support bank resolution activities. In 1984, the FDIC deployed 11 percent of its total examination staff time to such matters. This shift of resources among regions and across functions placed additional pressure on the examination force’s ability to detect problem banks, especially in a seemingly healthy area like New England, where a crisis was about to erupt.

19 The training cycle for newly hired examiners is lengthy and complicated; approximately three to five years are required before a new hire is a fully trained, commissioned examiner.


The reduction in examination staff, as mentioned above, was partly a side effect of the agencies’ decision to reduce the number of bank examinations and increase the median interval between examinations. The total number of examinations declined from a high of approximately 12,267 in 1981 to a low of approximately 8,312 in 1985, a drop of more than 30 percent (see figure 12.2). By far the largest decline occurred at state nonmember banks, where on-site examinations decreased more than 40 percent, from approximately 8,000 in 1981 to approximately 4,600 during 1985. Declines were more moderate for national banks and state member banks: both declined less than 15 percent during the same period. In addition to frequency, the scope of examinations was also curtailed, as limited resources gave the agencies no option but to continue to modify their examination procedures.
Reductions in examination frequency are tantamount to extensions of examination intervals. Between 1979 and 1986, the mean examination interval in days for all commercial and savings banks increased dramatically from 379 to 609 (see table 12.1). The intervals were increasing for all CAMEL rating categories, but especially for highly rated institutions. For 1-rated banks, the interval increased from 392 to 845 days; for 2-rated banks, from 396 to 656 days. The interval also grew for poorly rated institutions, but not as much.

Figure 12.2
Total Number of Examinations per Year and Total Number of Problem Banks, 1980–1994
[Line chart. Left axis: number of examinations, falling from 12,267 in 1981 to a low of 8,312 in 1985, then rising to a peak of 16,549. Right axis: number of problem banks (CAMEL rating of 4 and 5).]
Sources: FDIC, FRB, and OCC.
* Total number of examinations includes all examinations conducted by federal agencies and all state examinations accepted by federal authorities.


Table 12.1
Mean Examination Interval for Commercial Banks, by CAMEL Rating, 1979–1994
(Days)

          Composite CAMEL Rating
Year      1     2     3     4     5   All Banks
1979    392   396   338   285   257     379
1980    456   460   402   312   286     450
1981    493   482   342   279   236     472
1982    459   446   321   262   249     434
1983    500   450   309   261   243     436
1984    620   499   327   303   270     480
1985    761   596   369   324   284     564
1986    845   656   407   363   313     609
1987    754   597   386   354   284     556
1988    615   497   376   339   315     477
1989    562   487   373   324   296     466
1990    463   436   331   303   270     411
1991    420   412   323   286   273     385
1992    409   396   319   291   278     373
1993    400   379   296   286   232     363
1994    380   357   296   279   245     354

Sources: FDIC, FRB, and OCC.

For 4-rated banks, the interval increased from 285 to 363 days; for 5-rated banks, from 257 to 313 days. These data indicate that the regulatory policy in the early 1980s of focusing more resources on the examination of troubled banks and thus reducing examination intervals for these organizations was generally not being carried out successfully.20
Data on examination intervals by bank regulatory agency show that for the period 1980–86, overall examination intervals increased for all three agencies (see table 12.2). For the OCC, the interval increased about 45 percent, from 417 to 604 days; for the FDIC, about 37 percent, from 460 to 628 days. The increase for banks supervised by the Federal Reserve was a more modest 27 percent, from 411 to 520 days.
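The interval growth described in the last two paragraphs can be verified against Tables 12.1 and 12.2 with the same percentage-change arithmetic (endpoint values transcribed from the tables):

```python
def pct_increase(start: float, end: float) -> float:
    """Percentage increase from start to end."""
    return (end - start) / start * 100.0

# Mean examination interval in days, 1979 vs. 1986, by composite rating (Table 12.1).
by_rating = {1: (392, 845), 2: (396, 656), 4: (285, 363), 5: (257, 313)}

# Mean examination interval in days, 1980 vs. 1986, by agency (Table 12.2).
by_agency = {"OCC": (417, 604), "FDIC": (460, 628), "FRS": (411, 520)}

for rating, (start, end) in by_rating.items():
    print(f"CAMEL {rating}: +{pct_increase(start, end):.0f}%")
for agency, (start, end) in by_agency.items():
    print(f"{agency}: +{pct_increase(start, end):.0f}%")
```

The results (+116 percent for 1-rated banks, +66 percent for 2-rated banks, and roughly +45, +37, and +27 percent for the OCC, FDIC, and FRS) match the figures quoted in the text, and show how much more the intervals lengthened for highly rated institutions than for troubled ones.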
The reductions in examination frequency were most pronounced in the Southwest, particularly Texas, which had the largest concentration of problem and failed banks and

20 A study specifically of Texas banks reaches the same conclusion (John O’Keefe, “The Texas Banking Crisis: Causes and Consequences 1980–1989,” FDIC Banking Review 3, no. 2 [1990]: 12).


Table 12.2
Mean Examination Interval for Commercial Banks, by Regulatory Agency, 1980–1994
(Days)

Year    OCC   FDIC   FRS
1980    417    460   411
1981    521    451   502
1982    468    415   503
1983    469    415   514
1984    529    446   503
1985    567    568   532
1986    604    628   520
1987    511    580   516
1988    552    452   461
1989    589    415   461
1990    482    379   439
1991    445    356   414
1992    422    351   404
1993    433    333   386
1994    395    333   401

produced the greatest losses to the insurance fund.21 In Texas, for example, the average number of examinations for all banks declined from a high of more than 1,200 in 1983 to approximately 600 at year-end 1985 (see figure 12.3). This decline is reflected in the median number of days between examinations for all failed banks in the region (see figure 12.4). In the Southwest as a whole, the median interval for failed banks reached a high of 579 days in 1986; for failed Texas banks, it reached 667 days. The average for all U.S. banks that failed in the same year was substantially lower: 455 days.
Bank examination staffs and examination frequency increased during the second half of the 1980s and into the 1990s, as all of the agencies attempted to deal with the backlog of problem banks. In 1993 the number of field examiners reached a high for all federal and state agencies (9,614), up more than 30 percent over the number in 1979 (figure 12.1). In addition, the total number of examinations began trending upward beginning in 1985, until by the early 1990s the number of annual examinations reached the levels of the

21 For a more complete discussion of the issue of examination frequency in Texas and the Southwest during the 1980s, see O’Keefe, “The Texas Banking Crisis,” 1–14.
