Full disclosure: The perils and promise of transparency (2008)
Archon Fung, Mary Graham, David Weil
Appendix: Eighteen Major Cases

APPENDIX


Eighteen Major Cases



This book is based on an analysis of fifteen major U.S. and three international transparency policies. This Appendix provides a summary of the legislative history, purpose, provisions, politics, and dynamics of each of these policies. We have categorized U.S. cases by two broad policy objectives: reducing risks to the public and improving the quality and fairness of critical services. The three international policies are described in the final portion of this Appendix.

Further detail on each of these cases and links to related materials are available at the Transparency Policy Project Web site: http://www.transparencypolicy.net/.


TARGETED TRANSPARENCY IN THE UNITED STATES


Reducing Risks to the Public

Disclosing Corporate Finances to Reduce Risks to Investors
Created as a response to crisis, the United States’ system of corporate financial disclosure was cobbled together in 1933 and 1934 as a pragmatic compromise. Millions of Americans were left holding worthless securities when the stock market crashed in October 1929. By 1932, the value of stocks listed on the New York Stock Exchange had fallen by 83 percent. Congressional hearings revealed patterns of inflated earnings, insider trading, and secret deals by J. P. Morgan, National City, and other banks, hidden practices that contributed to the precipitous decline of public confidence in securities markets. Echoing Louis D. Brandeis's declaration that "sunlight is…the best disinfectant," Franklin D. Roosevelt, the nation’s newly elected president, championed legislation to expose financial practices to public scrutiny.[1]

The Securities and Exchange Acts of 1933 and 1934 required that publicly traded companies disclose information about their finances in standardized form in quarterly and annual reports. Congress also authorized the newly created Securities and Exchange Commission (SEC) to issue uniform accounting standards for company financial disclosures. To gain support for a workable compromise, the disclosure requirements excluded banks, railroads, and many companies. Felix Frankfurter, Roosevelt’s senior adviser on the legislation, called the Securities Act a “modest first installment” in protecting the public from hidden risks.[2]

Later crises strengthened disclosure requirements.[3] In the 1960s, the scope of disclosure was broadened when an unprecedented wave of conglomerate mergers followed by a sudden collapse of their stock prices created pressures for better information. Congress responded in 1968 with the Williams Act, which required disclosure of cash tender offers that would change ownership of more than 10 percent of company stock; Congress strengthened the law two years later by lowering the threshold for reporting to 5 percent and adding disclosure of product-line data.[4]

In 1969 and 1970, the Accounting Principles Board, an outdated instrument of accounting industry self-government, was replaced with the current Financial Accounting Standards Board (FASB) as one way to improve investors’ confidence in the disclosure system. The new private-sector board had authority to set accounting standards and featured broader representation and funding, a larger professional staff, and a better system of accountability. Over time, the board substantially tightened accounting standards.[5]

In the late 1970s, congressional investigations raised new questions about FASB’s domination by big business. In response the board opened meetings, allowed public comment on proposals, provided weekly publication of schedules and decisions on technical issues, framed industry-specific accounting standards, analyzed economic consequences of proposed actions, and eliminated a requirement that a majority of its members be chosen from the accounting profession.[6]

Over the years, other crises broadened the scope of disclosure and improved the accuracy and use of information. In 1970, for example, after 160 brokerages failed, Congress required new disclosures from broker-dealers concerning their management and financial stability.[7] In 1977, Congress broadened transparency in response to publicity about bribes and illegal campaign contributions by corporate executives.[8] Lapses in management in some of the nation’s largest corporations led the SEC to issue rules in 1978 and 1979 that required new disclosures concerning the independence of board members, board committee oversight of company operations, and failure of directors to attend meetings.[9] In the 1990s, increases in individual investing and the rise of online investing led the SEC to adopt "plain English" disclosure rules, which required prospectuses filed with the agency to be written in short, clear sentences using nontechnical vocabulary and featuring graphic aids.[10]

The sudden collapse of Enron Corp. in December 2001 once again created a crisis-response scenario that generated pressures to improve corporate financial reporting. Shareholders lost their savings and employees lost retirement funds when the nation's largest energy trader filed for bankruptcy.

Enron’s collapse pointed to systemic problems with the United States' most trusted public disclosure system. The SEC charged executives of Waste Management, WorldCom, Adelphia Communications, Tyco International, Dynegy, Safety-Kleen Corp., and other large companies with a variety of offenses related to withholding information from the public. Executives of Enron, WorldCom, and other large companies were indicted for fraud and other offenses. Ten large investment firms settled with the SEC, the New York State attorney general, and other regulators for permitting improper influence of their research analysts by their investment banking interests. Arthur Andersen, Enron’s auditor, was charged with obstruction of justice for destroying auditing documents, a blow to the firm’s reputation that drove it out of business. Evidence of collaboration by accounting firms that also earned huge consulting fees, stock boosting by analysts, and inadequate oversight by company boards, as well as a declining stock market, once again called into question the integrity of the corporate financial disclosure system.[11]

The systemic problem was that the disclosure system had failed to keep pace with changing markets. After the fact, Congress’s General Accounting Office (GAO) concluded that

changes in the business environment, such as the growth in information technology, new types of relationships between companies, and the increasing use of complex business transactions and financial instruments, constantly threaten the relevance of financial statements and pose a formidable challenge to standard setters….Enron’s failure…raised…issues…such as the need for additional transparency, clarity, more timely information, and risk-oriented financial reporting.[12]

By 2002, another round of disclosure reform was under way. Public companies, accounting firms, stock exchanges, analysts, and other participants in securities markets all made voluntary changes. On July 30, 2002, President George W. Bush signed into law the most far-reaching reforms of financial disclosure since the 1930s. The Sarbanes-Oxley Act, sponsored by Senator Paul Sarbanes (D-Md.), senior Democrat on the Senate Banking Committee, and Representative Mike Oxley (R-Ohio), chair of the House Financial Services Committee, created a new agency charged with watching over the accounting watchdogs. The private, nonprofit Public Company Accounting Oversight Board, consisting of five members appointed by the SEC and a staff of five hundred, was authorized to establish auditing standards, monitor accounting firms' practices, and fine them for improprieties.

The law also limited consulting services that auditors could offer to corporate clients and required rotation of partners assigned to corporations every five years. It established new criminal penalties, including twenty-five-year jail terms for securities fraud and twenty-year terms for destroying records. It required chief executives and financial officers to certify financial reports and required that material changes in financial condition be disclosed immediately in plain English. It also established a restitution fund for wronged shareholders. In what would become the law's most controversial provision – because of its high cost, as its requirements were translated into new demands on companies by outside auditors – section 404 held managers responsible for maintaining adequate internal controls over financial reporting.[13]

In other disclosure reforms, the SEC required public companies to file annual and quarterly reports more quickly (generally annual reports within sixty rather than ninety days after the end of the year and quarterly reports within thirty-five rather than forty-five days after the end of the quarter). New disclosure rules also required expensing of stock options, fuller financial disclosure by mutual funds, and more information about executive pay.[14]

The accounting scandals of 2001 and 2002 also led to new ideas about making financial reporting more useful to investors. A forum convened by the GAO in December 2002 noted that the model of financial reporting had not changed since the 1970s and was "driven by the supply side…accountants, regulators, and corporate management and boards of directors."[15] The GAO suggested layering reporting to give users the information they needed and encouraging “demand-side,” user-centered disclosure reforms.[16]

In an interesting complementary effort to improve the capacity of information users to understand financial information, Congress also approved the Financial Literacy and Education Improvement Act, which created a commission to develop a national strategy to promote financial literacy. The new law responded to research that suggested that many Americans lacked the knowledge needed to make informed financial judgments.[17]

In 2006, the reform of the corporate financial disclosure system remained a work in progress. The costs of more rigorous disclosure, especially to small businesses, and the reach of reforms to companies headquartered in other countries were among the many controversial political issues. It remained to be seen whether recent legislative cures in fact would reduce underreporting and misreporting by companies and prove cost-effective in the long run.

Disclosing Chemical Hazards to Reduce Workplace Health and Safety Risks

A National Institute for Occupational Safety and Health survey conducted in 1972 found that “approximately 25 million U.S. workers, or one in four, [were] potentially exposed to one or more of ...nearly 8000 hazards” and that 40 to 50 million Americans, amounting to over 20 percent of the population, may have been exposed to hazardous chemicals.[18] Often neither employers nor employees were aware of the presence of hazardous substances in the workplace. Lack of knowledge hampered diagnosis and treatment when workers became ill from chemical exposure.

Responding to this problem in the 1970s, unions, public interest groups, and state legislators promoted the idea of a workers’ “right-to-know” about chemical exposures and associated dangers.[19] The federal Occupational Safety and Health Administration (OSHA) had issued standards specifying limits on levels of benzene, lead, and some other extremely toxic chemicals, but promulgating separate standards for hundreds of thousands of hazardous chemical products seemed impractical. Instead, labor and other public interest groups pressed for an approach based on greater transparency.

In 1981, the Carter administration proposed a disclosure requirement that would have applied "to virtually all businesses that used hazardous substances."[20] The Reagan administration, however, proved more hostile to greater transparency, prompting unions to shift their lobbying efforts from the federal to the state level. As a result, many states – including New Jersey, Pennsylvania, and Illinois – adopted their own right-to-know laws by the mid 1980s.[21] At that point, industry groups supported adoption of a uniform federal standard as an alternative to variable state right-to-know laws, and the federal hazard communication standard was adopted in 1983. The Reagan administration narrowed the initial rule to require only manufacturing firms to disclose chemical information.[22] OSHA argued that manufacturing amounted to 32 percent of total employment and accounted for more than 50 percent of illnesses caused by exposure to chemicals.[23]

The requirement created a two-part chain of disclosure. First, chemical manufacturers and importers evaluated the hazardousness of the substances they produced or imported and disclosed that information to employers who purchased their products. Second, employers made the information available to workers who handled hazardous substances. Manufacturers and importers attached to containers of hazardous chemicals descriptive labels listing the identity of the substance, a hazard warning, and the company’s name and address. Chemical manufacturers also provided employers with material safety data sheets that contained more extensive information about chemical identity, physical and chemical characteristics, physical and health hazards, precautions, and emergency measures. Finally, in plants where workers were exposed to hazardous substances, employers were required to provide the data sheets and train employees in accessing chemical information, protecting themselves from risk, and responding to emergencies.

Many labor and consumer groups were unsatisfied with the disclosure system’s limited scope, however. Soon after its approval, United Steelworkers of America, AFL-CIO-CLC, and Public Citizen attacked the standard’s narrow scope and preemption of sometimes stronger state right-to-know laws. Rejecting the Reagan administration’s rationale for limiting disclosure to manufacturing firms, the U.S. Court of Appeals in 1985 directed the secretary of labor to extend disclosure to all sectors. In 1987, a new court ruling confirmed that all industries where employees were potentially exposed to hazardous chemicals had to comply with the disclosure requirements. By 2004, OSHA estimated that over 30 million American workers were exposed to hazardous chemicals in their workplaces and that the hazardous chemical reporting system affected around 3 million workplaces and 650,000 chemical substances.[24]

Over time, chemical manufacturers improved their disclosure of chemical hazards. Manufacturers responded to employers’ requests for additional chemical information and sought to limit their potential liability for willfully hiding information on dangerous chemicals.[25] A 1992 study by the GAO found that 56 percent of surveyed employers reported “great” or “very great” improvement in the availability of information, and 30 percent said they substituted less-hazardous chemicals because of the information they received.[26]

Material safety data sheets became a routine method of conveying product information about both hazardous and nonhazardous chemicals. Many firms now post on the Internet data sheets for all their products, and a number of Web sites offer searchable databases. Some manufacturers use disclosure as a competitive tool, offering their customers more information than OSHA requires, including guidance on how to comply with disclosure requirements, training materials, and experts to assist customers.[27]

Manufacturers and employers also improved the quality of the reported information. Responding to criticism about the quality of material safety data sheets, the Chemical Manufacturers Association convened a committee to develop guidelines for the preparation of such sheets. Their effort contributed to the adoption of a voluntary industry standard for these sheets in 1993, which was subsequently endorsed by OSHA.

Despite progress of this kind and several OSHA guidelines aimed at improving disclosure, chemical hazard disclosure ranked second in the list of the ten most violated OSHA standards in 2005, accounting for over 8 percent of all violations.[28]

The extent to which workers comprehend disclosed information about chemical hazards and take protective measures in response also remains unclear. Surveys have shown that employees are generally able to understand only around 60 percent of information in chemical data sheets,[29] with more-educated workers doing significantly better than those who are less educated.[30] Even in cases where workers understand safety information, surveys suggest that they often make only limited use of it.[31] It is also interesting to note that all documented cases suggesting that training and information disclosure have a positive impact on workers’ behavior involve unionized firms where labor organizations may have played an intermediary training or information-disseminating role.[32]

At the international level, OSHA played an important role in the development of an international format for chemical classification and labeling, leading to the United Nations' adoption of a globally harmonized standard in 2002.[33] That standard, scheduled for implementation by 2008, had not yet been adopted in 2006 by OSHA for use in the United States, however.

Disclosing Toxic Releases to Reduce Pollution
Following a tragic accident at a pesticide plant in Bhopal, India, in 1984, in which deadly gas killed more than two thousand people in surrounding areas and injured more than a hundred thousand, the U.S. Congress required manufacturers that produced or used large quantities of a selected list of toxic chemicals to report annually on quantities of their release into the air or water or onto land, chemical by chemical and factory by factory. The company disclosures were assembled by the federal Environmental Protection Agency (EPA) in a Toxics Release Inventory (TRI). The Bhopal disaster provided the immediate impetus for toxic pollution disclosure. But the idea that the public had a right to know about toxic pollution in communities was also rooted in a decade of work by labor and community groups aimed at disclosing workplace and community hazards.[34]

The new requirement represented a hastily constructed political compromise tacked onto a larger legislative effort to provide an emergency response system for chemical accidents. Disclosure was supported by key senators – Robert Stafford (R-Vt.), Frank Lautenberg (D-N.J.), and Lloyd Bentsen (D-Tex.) – and by right-to-know and environmental groups. However, manufacturers sought successfully to narrow its scope by limiting the chemicals to be reported and the manufacturers required to report, excluding reporting of toxic chemical use (as opposed to release into the environment), and allowing companies to estimate releases using a variety of techniques that could be changed without notice. The EPA initially saw the disclosure system as a burdensome paperwork requirement.

Over time, however, toxic pollution disclosure provided an important bridge between traditional right-to-know measures and newer targeted transparency systems. When disclosure caused some large companies to make voluntary, immediate, and drastic reductions in toxic pollution, federal officials began to refer to the requirement as one of the nation's most successful environmental regulations. By the late 1990s, the disclosure system was credited with reducing toxic releases by nearly half in little more than a decade.[35]

The dynamics of toxic pollution disclosure reflected shifting political priorities. In the 1990s, the Clinton administration substantially strengthened disclosure by increasing the number of chemicals covered, lowering thresholds for reporting of particularly hazardous chemicals, and requiring federal facilities, power plants, and mining operations to report.[36] However, the George W. Bush administration asked for cutbacks in reporting in 2006. The administration proposed relieving nearly four thousand companies from detailed reporting and suggested reducing reporting to every other year as a cost-cutting measure.[37]

Weaknesses in the disclosure system persisted. Disclosure metrics (releases in pounds) did not help citizens assess toxicity or exposure and therefore could not create incentives to reduce risks efficiently.[38] In addition, companies used different estimating techniques, data accuracy remained uncertain, and, despite advances in information technology that made near real-time reporting feasible, timeliness of disclosure remained a serious problem. Factory-by-factory toxic pollution for calendar year 2004 was not reported to the public until April 12, 2006.[39]

Disclosing Nutritional Information to Reduce Disease
The Nutrition Labeling and Education Act of 1990 (NLEA) required food processors to label products with amounts of key nutrients as a public health measure.[40] Chronic diseases such as heart ailments, cancer, and diabetes were the largest causes of preventable deaths in the United States, killing more than 1.5 million people each year. Scientists agreed that the single most important factor in preventing and minimizing the effects of such diseases was improved diet. Before Congress acted, however, consumers had no way to assess the healthfulness of most packaged foods. Supporters of the law hoped that it would create new incentives for Americans to eat healthier foods and for manufacturers to market healthier products.[41]

Consumer groups combined with organizations such as the American Cancer Society and the American Heart Association to promote nutritional labeling as a public health measure rather than simply a right-to-know cause. Entrepreneurial members of Congress, led by Representative Henry Waxman (D-Calif.) and Senator Howard Metzenbaum (D-Ohio), pressed for the new labeling law. The food industry supported disclosure both as preferable to conflicting state requirements and as a means to reap profits from marketing healthy products.

The new law required food processors to label in standardized formats amounts in each serving of total fat, saturated fat, cholesterol, sodium, total carbohydrates, complex carbohydrates, sugars, dietary fiber, and total protein, in the context of amounts recommended for consumption as part of a daily diet. Companies also had to list total calories and calories from fat in each serving. Serving sizes were standardized to conform to amounts customarily consumed. Products that were not labeled accurately and completely could be deemed misbranded by the federal Food and Drug Administration and removed from the market. In 1994, when the law took effect, interested shoppers could compare nutrients in virtually every can, bottle, or package of processed food for the first time. The law was appropriately heralded as the most important change in national food policy in fifty years.[42]
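
Because the act specified a fixed set of per-serving disclosures, the required label contents can be sketched as a simple record. The Python sketch below merely encodes the fields listed above; the field names, units, and sample values are illustrative assumptions, not the FDA's actual label format.

```python
# A minimal sketch of the per-serving disclosures the NLEA required on
# packaged foods. Field names, units, and the sample values below are
# illustrative; the FDA's regulations define the actual label format.
from dataclasses import dataclass

@dataclass
class NutritionLabel:
    serving_size: str               # standardized to amounts customarily consumed
    calories: int
    calories_from_fat: int
    total_fat_g: float
    saturated_fat_g: float
    cholesterol_mg: float
    sodium_mg: float
    total_carbohydrates_g: float
    complex_carbohydrates_g: float  # listed separately under the 1990 act
    sugars_g: float
    dietary_fiber_g: float
    protein_g: float

# A hypothetical labeled product:
soup = NutritionLabel("1 cup (240 mL)", 120, 25, 3.0, 1.0, 5.0, 680.0,
                      18.0, 12.0, 4.0, 2.0, 6.0)
print(f"{soup.sodium_mg} mg sodium per {soup.serving_size}")
```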

However, Congress also gerrymandered the labeling requirement to satisfy powerful interests, exempting nearly half of consumers’ food purchases. Fast-food outlets, full-service restaurants, fresh meats and seafood, deli items, and dietary supplements all escaped labeling.[43]

Nutritional labeling did improve over time – but only slowly and sporadically. Often labeling failed to keep pace with new science. Scientists had known since the 1970s that trans fatty acids were the most health-threatening fats, for example. The FDA, however, did not require their listing on food labels until 2006.[44] Major food allergens, too, were not clearly labeled until 2006.[45] And labels continued to group together all carbohydrates despite evidence that complex carbohydrates were healthier than simple carbohydrates. In a particularly serious limitation, the risks and benefits of dietary supplements remained largely undisclosed.

Disclosing Medical Mistakes to Reduce Deaths and Injuries

Despite an urgent call by the prestigious Institute of Medicine in 1999 for a new disclosure system to reduce medical mistakes in hospitals, federal moves to increase transparency have been slow and contentious, and state reporting requirements have proven difficult to sustain.[46]

In its 1999 report, the institute, an arm of the congressionally chartered National Academy of Sciences, concluded that between 44,000 and 98,000 patients died in the United States annually as a result of hospital errors.[47] In addition, as many as 938,000 hospital patients were injured each year by such errors. High rates of error were costly not only in deaths and injuries but also in loss of trust by patients in the health-care system, loss of morale by health-care professionals, loss of productivity by the workers who were their victims, and in many other ways. In economic terms alone, estimated national costs of preventable hospital errors resulting in injury or death totaled between $17 and $29 billion a year.[48]

Instead of new rules or stiff penalties for doctors, the institute called on Congress and state governments to require standardized public disclosure by health-care organizations of incidents where mistakes resulted in death or serious injury. Public disclosure would hold providers accountable for serious errors, create incentives to reduce them, and inform patients’ hospital choices. The report also recommended that Congress take action to encourage voluntary and confidential reporting by doctors, nurses, and other health-care workers of less serious errors and near misses.[49]

Response was immediate. President Bill Clinton announced that he favored national action to reduce medical errors by 50 percent in five years, as the institute’s panel had recommended. National news reports featured the institute’s troubling findings about the frequency of medical mistakes and officials’ commitments to take action. Weeks after the report was released, a poll taken by the Kaiser Family Foundation found that an astonishing 51 percent of respondents were aware of it.

However, conflicting interests created a political stalemate that blocked disclosure. The apparent consensus for national action splintered into battles among groups representing doctors and hospitals, public health advocates, state officials, consumer groups, and trial lawyers. When the debate got down to specifics, the American Medical Association and the American Hospital Association opposed the kind of hospital-by-hospital disclosure of serious errors that would be meaningful to consumers, although the American Nurses Association and a variety of consumer groups supported such transparency. Organizations representing health-care providers argued that information about errors should have broad protection from discovery in lawsuits. On that issue they were opposed by the American Trial Lawyers Association, which sought to narrow such confidentiality. Agreement on legislation remained elusive.[50]

These unsuccessful efforts to institute a national hospital-specific reporting system came in the wake of some limited reporting initiatives by a few states in the early 1990s. Most state hospital-specific public reporting systems reported patient outcomes (mortality rates, for example) rather than medical mistakes and focused on narrow subsets of medical procedures rather than on the comprehensive system proposed by the Institute of Medicine report. Among the strongest state systems were New York’s Cardiac Surgery Reporting System, adopted in 1989, which provided both hospital- and doctor-level information on patient outcomes for that procedure,[51] and Pennsylvania’s requirement in 1992, which provided information regarding mortality, morbidity, and other patient treatment outcomes related to coronary artery bypass surgery.[52]

However, as of 2006, most state and federal efforts continued to focus on confidential reporting or on reporting that aggregated hospital data, rather than on public disclosure of facility-specific information about medical mistakes that could help patients make informed choices and bring public pressure to bear on hospital safety. In 2005 Congress approved the Patient Safety and Quality Improvement Act, which provided a framework for voluntary reporting of medical errors by hospitals to state data centers but also established strong confidentiality requirements.[53] Twenty-three states collected data on medical errors, but virtually all required that information remain confidential. An exception was Minnesota, which required in 2002 that medical errors be reported to the public, hospital by hospital.[54] Periodic audits suggested that even confidential reporting was often late or inaccurate.[55]

More-general quality-of-care rating systems fared better. By 2006, the federal Department of Health and Human Services as well as the Joint Commission on Accreditation of Healthcare Organizations had nascent systems that ranked hospitals on the basis of Medicaid and Medicare data and surveys.[56]

Disclosing Sex Offenders’ Residences to Improve Public Safety
In response to public outrage following the rape and murder of a seven-year-old girl named Megan Kanka by a released sex offender, New Jersey approved legislation in 1994 requiring disclosure of the places of residence of released sex offenders. Two years later, the federal Megan’s Law was enacted. It required that all states release information to the public about known convicted sex offenders. States were given considerable discretion in how information would be provided, how frequently it would be updated, and how detailed it would be. The federal law amended an earlier statute that required states to maintain registries of released sex offenders.[57]

By 2006, all fifty states and the District of Columbia had created some form of sex offender registry and had provided for community notification of offenders’ places of residence.[58] Notification methods varied widely from state to state, from active communication by police via door-to-door visits, mailings, and community meetings, to notice via hotlines or Web sites.[59] The constitutionality of state laws in Connecticut and Alaska was upheld by the Supreme Court in 2003 after lower courts struck them down as violations of due process and on other grounds.[60]

Washington State’s sex offender registration and notification system, the state system that we have analyzed for this book, predates both federal statutes. The state’s 1990 Community Protection Act was based on a finding that “sex offenders pose a high risk of engaging in sex offenses even after being released from incarceration”[61] and aimed to provide notice about the current residence of released sex offenders as a means of reducing risks to individuals and the community.[62]

In order to provide “necessary and relevant information” to the public, the law required that any adult or juvenile convicted of any sex or kidnapping offense register with the county sheriff’s department within twenty-four hours of release or thirty days of becoming a new state resident.[63] Offenders were required to provide their name, address, date and place of birth, place of employment, information about the crime, a photograph, and other personal data.[64] Those convicted of Class A felonies remained on the list throughout their lives, while those convicted of lesser crimes remained on the list for ten or fifteen years. Failure to register or provide accurate information was deemed a Class C felony or gross misdemeanor, depending on the severity of the original crime.[65]

Community notification was provided through mailings, direct notification by the police, and the Internet. Washington was one of the first states to provide an Internet-based system for searching and locating individuals on the registry, which includes photographs of offenders.[66] Members of the public are given essentially unlimited access to personal information on offenders, including their conviction records. The state’s Web site does caution that “[t]he information ...should not be used in any manner to injure, harass, or commit a criminal act against any individual named in the registry, or residing at the reported address. Any such action could subject you to criminal prosecution.”[67]

Washington’s sex offender disclosure system has become more rigorous over time. The law has been amended to allow police to disclose relevant information to public and private schools, child and family day care centers, and businesses and other organizations that primarily serve children and community groups. State officials have increased the amount of information required and tightened the timeliness of submission and requirements for updating changes in residence. As of March 31, 2006, 18,943 sex and kidnapping offenders were listed on the Washington public registry. The state does not estimate compliance rates. Parents for Megan’s Law, a national organization that monitors state-level Megan’s Laws, estimates that about one-quarter of sex offenders nationally fail to comply with state registration requirements.[68]

Disclosing Contaminants to Improve Drinking Water Safety
Under the Safe Drinking Water Act of 1974,[69] the federal EPA set maximum safe contaminant levels for drinking water and required water systems to notify customers of violations.[70] However, in practice such notification often did not take place.[71] Public attention focused on the health risks associated with contaminated water in 1993 after the largest outbreak of waterborne disease on record in the United States. In Milwaukee, Wisconsin, four hundred thousand people became sick, forty-four hundred were hospitalized, and more than fifty died from drinking water contaminated with a microbe called cryptosporidium.[72]

In response, Congress in 1996 amended the federal Safe Drinking Water Act to require that water suppliers, starting in October 1999, provide customers with annual reports on contamination. The annual reports included information on the source of tap water, contaminants found in the water, sources of contamination, and violations of EPA maximum contaminant levels. Their purpose was to allow consumers to make better choices concerning their use of tap water and to encourage water utilities to be more vigilant in minimizing contaminants.[73]

The Milwaukee incident was not the only driver of greater transparency. Americans were losing confidence in their public water supplies. Surveys in the late 1990s found that only three-quarters of Americans regularly drank tap water, and 65 percent increasingly used bottled water or filtered water at the tap.[74] Experts suggested that drinking water contaminants were responsible for as many as one-third of nine hundred thousand “stomach flu” illnesses each year.[75]

Contamination levels varied widely with seasons, rainfall, and waste discharges. Sometimes chemicals and microbes entered systems as water flowed to homes through century-old pipes.[76] The EPA stated in 2004 that 27 of the 834 water systems serving more than fifty thousand people had exceeded federal safety standards for lead at least once since 2000.[77] The water system serving the nation’s capital had failed to comply with sampling requirements and had failed to report to consumers that more than 10 percent of tap water samples since 2000 exceeded federal lead levels.[78]

Transparency requirements proved too weak to help residents assess risks or compare the safety of different water systems, however. They did not require consistent protocols, units of measurement, or formats for reporting contaminants. In 2003, an analysis of drinking water reports in nineteen cities by the Natural Resources Defense Council found that some cities buried or omitted information about health effects of contamination or warnings to consumers with compromised immune systems, all omitted information about specific polluters, fewer than half offered reports in languages other than English, and many made sweeping and inaccurate claims about water safety despite violations of federal contaminant levels.[79]

As of 2006, the drinking water contaminant disclosure system appeared to be unsustainable. Reports had improved little over the years in scope, quality, or use. Interestingly, new emphasis on homeland security raised the possibility of requiring more timely monitoring (and perhaps disclosure). In 2004, experts convened by the federal Government Accountability Office ranked “near real-time monitoring technologies” to detect contaminants as the highest priority in improving drinking water security.[80] Two years earlier, the National Academy of Sciences rated improved monitoring technologies as one of four top security priorities for drinking water supplies.[81]

Disclosing Restaurant Hygiene to Protect Public Health
On November 16, 1997, the CBS affiliate in Los Angeles, KCBS, broadcast the first of a three-part series regarding restaurant hygiene. Using the increasingly popular “hidden camera” technique, the local news exposé took viewers behind the scenes into a number of restaurant kitchens.[82] The series revealed a smorgasbord of unsanitary practices that – according to the series – were common in restaurants throughout Los Angeles County, despite the presence of an aggressive restaurant hygiene monitoring system maintained by the county. The anecdotal evidence in Los Angeles, however, was indicative of a more widespread problem. Food-borne diseases cause an estimated 325,000 hospitalizations and 5,000 deaths each year in the United States. The Centers for Disease Control (CDC) estimates that nearly 50 percent of food-borne disease outbreaks are connected to restaurants or other commercial food outlets.[83]

The public outcry arising from the investigative series led the Los Angeles County Board of Supervisors to legislate transparency to inform the public about hygiene conditions in all restaurants in the region. They unanimously adopted a disclosure requirement on December 16, 1997 (one month after the series was aired), which went into effect on January 16, 1998. The county ordinance requires public posting of restaurant hygiene grades (A, B, or C) based on Los Angeles County Department of Health Services (DHS) inspections. By making these grades public, the Board of Supervisors sought to reduce the effects of food-borne diseases by putting competitive pressure on public eating establishments with poor hygiene practices.[84] Not surprisingly, the requirement was opposed by the California Restaurant Association (a statewide trade group), as well as by many local restaurant associations. Although the transparency requirement was adopted at the county level, individual cities within the county were not required to adopt the ordinance (all but ten had chosen to do so by the end of 2005).[85]

The system builds directly on the health inspections conducted regularly by the DHS. Health inspections cover a range of very specific practices, including food temperatures, kitchen and serving area handling and preparation practices, equipment cleaning and employee sanitary practices, and surveillance of vermin.[86] Each violation receives one or more points. Cumulative points are then deducted from a starting score of 100. A score from 90 to 100 points receives an A, 80 to 89 a B, and 70 to 79 a C.[87] Cumulative scores below 70 require immediate remediation by the restaurant owner, which may include suspension of the owner’s public health permit and closing of the restaurant.[88]
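
To make the scoring arithmetic concrete, here is a minimal Python sketch of the deduction-and-banding logic described above. The violation names and point values in the example are hypothetical; the actual DHS inspection form defines its own items and deductions.

```python
# Sketch of the Los Angeles County grading arithmetic described above:
# violation points are deducted from a starting score of 100, and the
# cumulative score maps to a posted letter grade.

def hygiene_score(violation_points):
    """Subtract cumulative violation points from the starting score of 100."""
    return 100 - sum(violation_points)

def letter_grade(score):
    """Map a numeric inspection score to the grade card described in the text."""
    if score >= 90:
        return "A"   # 90-100
    if score >= 80:
        return "B"   # 80-89
    if score >= 70:
        return "C"   # 70-79
    # Below 70: immediate remediation, possibly including suspension of the
    # public health permit and closure, rather than a posted grade.
    return "below C"

# Hypothetical inspection: 4 points for food-temperature violations and
# 6 points for evidence of vermin.
score = hygiene_score([4, 6])
print(score, letter_grade(score))  # 90 A
```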

The transparency system requires restaurants to post the letter grade arising from the most recent inspection on the front window.[89] A searchable Web-based system includes inspection grades, numeric scores on which the grades were based, and a listing of specific violations found on the last inspection. Restaurants receive two or three unannounced inspections and one reinspection, upon request, per year. Thus, although the posting of grade cards entails relatively small costs, the system relies on a large number of inspections (about seventy-five thousand in 2003) and therefore means a sizable enforcement budget for the DHS.

The introduction of the new transparency system led to fairly rapid and significant changes in the overall grade distribution in county restaurants (as noted, the grading system existed before the disclosure requirement). When the program began, 58 percent of restaurants received an A grade, a number that grew to 83 percent by 2003. The incentives to improve are significant. Jin and Leslie report that after grade posting became required, restaurants receiving an A grade experienced revenue increases of 5.7 percent (other factors held constant); B grade restaurants had increases of 0.7 percent; and those with a C grade had declines in revenue of 1 percent.[90] The introduction of grades also improved hygiene at franchised units in chain restaurants, even though franchised units had tended to have lower hygiene than company-owned restaurants.[91]

More important, studies found significant decreases in food-borne-illness hospitalizations, ranging from 13 percent (Simon et al., 2005) to 20 percent (Jin and Leslie, 2003).

The system is not without its problems. There is some evidence that inspectors have become more lenient over time.[92] There is no systematic evidence of corruption in grading, although the economic incentives for it are significant, given the high stakes involved in restaurant grades.[93] Some critics of the system have argued that it is incompatible with the standard food preparation practices of certain ethnic groups, who therefore face an unfair disadvantage from the grading system.[94]

Several other cities in the United States have similar restaurant hygiene disclosure systems.[95] Although eight states had introduced legislation requiring the posting of grade cards, as of 2005 only Tennessee and North Carolina had statewide systems.[96]

Disclosing Rollover Propensities to Improve Auto Safety
In 2000, a series of widely reported traffic fatalities associated with rollovers of popular sport utility vehicles (SUVs) drew national attention. These incidents, which also involved sudden tread separation in certain lines of Firestone tires, highlighted a more general public safety problem. SUVs were more likely than sedans or station wagons to roll over, and some SUVs were much more likely to roll over than others.[97]

Improving public understanding of the propensity of vehicles to roll over was important because rollover accidents remained the most deadly auto accidents in the United States and were increasing. Rollovers accounted for less than 4 percent of all auto accidents but for about a third of driver and passenger fatalities (61 percent of SUV fatalities and 22 percent of passenger-car fatalities). From 1991 to 2001 the number of drivers and passengers killed in all automobile accidents in the United States increased by 4 percent, while deaths in rollover accidents increased by 10 percent. Light-truck (including SUV) rollover fatalities increased 43 percent, whereas passenger-car rollover fatalities declined 15 percent.[98]

Improving public understanding of rollover risks was also important because federal rules did not set any minimum safety standards for new-model rollover performance, as they did for front and side impact crashworthiness. The auto industry had successfully opposed such a standard for two decades.[99]

In response to the Firestone/SUV accidents, Congress approved a new targeted transparency system aimed at informing car buyers’ choices and providing incentives for manufacturers to design vehicles less prone to rollovers. The Transportation Recall Enhancement, Accountability, and Documentation (TREAD) Act of 2000 required public disclosure of the rollover propensity of each new-model car and SUV as measured by government tests.[100]

Regulators required rollover ratings to be presented in a simple five-star format that paralleled the existing star rating systems for front and side impact crashworthiness.[101] A five-star vehicle had a 10 percent or less chance of rolling over, while a one-star vehicle had a 40 percent or more chance of rolling over.[102]
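
The star bands can be illustrated with a short sketch. Only the two endpoints below come from the text (five stars for a 10 percent or lower rollover chance, one star for 40 percent or higher); the intermediate cutoffs are assumed to be evenly spaced ten-point bands and are not taken from the regulation itself.

```python
# Illustrative mapping from an estimated rollover probability to a star
# rating. Only the five-star (<= 10%) and one-star (>= 40%) anchors appear
# in the text; the intermediate bands below are assumptions for illustration.

def rollover_stars(probability):
    """Return a 1-5 star rating for a rollover probability between 0 and 1."""
    if probability <= 0.10:
        return 5
    if probability <= 0.20:   # assumed band
        return 4
    if probability <= 0.30:   # assumed band
        return 3
    if probability < 0.40:    # assumed band
        return 2
    return 1                  # 40 percent or more

print(rollover_stars(0.08))  # 5
print(rollover_stars(0.43))  # 1
```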

The new law and regulations added other disclosure requirements. They required tire pressure monitoring sensors by 2004, automakers’ disclosure of information on customer complaints and other early indications of safety defects,[103] and new labels to make it easier for car owners to see if their tires had been recalled.[104]

Disclosure improved over time. The TREAD Act included an innovative provision that required that the government’s initial mathematical modeling of rollover propensity be replaced with a road test that would more accurately mimic real-world driving conditions; Congress also directed the National Academy of Sciences to study possible tests quickly and required regulators to consider the academy’s recommendations.[105] As a result, officials instituted a more accurate test in 2004 that combined modeling with driving maneuvers.[106]

In 2005, Congress further increased consumer access to rollover information by requiring that rollover ratings be posted on new-car stickers in auto showrooms.[107]

Early evidence suggested that auto rollover disclosure helped to inform consumers and encourage safer new-model design. Five years after the requirement was introduced, only one model (the Ford Explorer Sport Trac) received as few as two stars, while twenty-four models earned four-star ratings.[108] Congress’s Government Accountability Office concluded that ratings were “successful in encouraging manufacturers to make safer vehicles and providing information to consumers.”[109] Manufacturers used ratings as a marketing tool in television and print ads.[110]

Interestingly, this targeted transparency system also helped to change the politics of auto safety regulation. By encouraging manufacturers to accelerate introduction of new stabilizing technology, the rollover rating system reduced industry opposition to a minimum safety standard for rollovers. In 2005, Congress directed regulators to issue such a standard.[111]

However, as of 2006, the rollover rating system still had significant weaknesses. The system relied on government rather than manufacturer tests. As a result of budget and logistical constraints, not all new-model cars were tested, and some test results were not available until late in the model year.[112]

Ratings also did not allow consumers to compare the safety of specific models across weight classes.[113]

In addition, backup data for star ratings remained difficult to access. Consumers had to delve into the government’s docket management system or research and development Web page, or visit a National Crash Analysis Center in Washington, D.C.[114]

Rollover star ratings themselves remained controversial, as did the longer-established star ratings for crashworthiness in front and side impacts. The Transportation Research Board, as well as consumer groups and auto insurance associations, charged that star ratings gave consumers a falsely positive impression of safety, since one-star vehicles could have a 40 percent chance of rolling over, and that ratings diminished in usefulness when most vehicles earned four or five stars.[115]

Disclosing Terrorism Threats to Improve Public Safety
Six months after the attacks of September 11, 2001, the Bush administration created a color-coded ranking system to inform the public about terrorist threats. The system’s stated purpose was “to provide a comprehensive and effective means of disseminating information regarding the risk of terrorist acts to Federal, State, and local authorities and to the American people” in order “to inform and facilitate decisions appropriate to different levels of government and to private citizens at home and at work.” The aim was to minimize attacks and their consequences. The system was designed to be flexible and information-based. It provided a framework for communicating the severity of national, local, or sector-specific threats as well as their likely character and timing.[116]

The alert system established five color-coded levels of terrorist threat: green – low; blue – guarded; yellow – elevated; orange – high; red – severe. The presidential directive clearly contemplated that alerts would be accompanied by factual information.[117]

The directive also made it clear that information was intended to create incentives for action. Each level of alert was meant to trigger threat-specific protective measures by government agencies, private organizations, and individuals. The directive provided that threat levels would reflect both the probability and the gravity of attack and would be reviewed at regular intervals to see if they should be adjusted. The level set was to be based on the degree to which a threat was credible, corroborated, imminent, and grave.[118]

The system provided flexibility. Threat levels could be set for specific geographical areas or for specific industries or facilities. The system provided for case-by-case judgments about whether threat levels would be announced publicly or communicated in a more limited way to emergency officials and other selected audiences. The stated intent was to “share as much information regarding the threat as possible, consistent with the safety of the Nation.”[119]

Once the Department of Homeland Security (DHS) was created in March 2003, the secretary of homeland security was charged with responsibility for setting threat levels, with the advice of the Homeland Security Council.[120]

Within the department, the warning system was administered by an undersecretary for information analysis and infrastructure protection.

As of early 2006, the terrorist threat warning level had been raised and lowered seven times, each time from yellow (elevated) to orange (high) and back again. The system generally produced warnings that proved too vague to provide government officials, business managers, or ordinary citizens with incentives to take appropriate protective actions.

However, alerts were increasingly specific. On August 2, 2004, the Department of Homeland Security issued a warning concerning three particular facilities: the Prudential building in Newark, New Jersey, and the headquarters of the World Bank and International Monetary Fund in Washington, D.C. On July 7, 2005, when several bombs were detonated in the London subway system, DHS raised the threat level from yellow to orange for mass transit only, though noting that the government had “no specific, credible information” to suggest that an attack in the United States was imminent.[121]

The system worked differently for different audiences. When a decision was made to change the threat level, department officials notified federal, state, and local agencies electronically or by phone and also called chief executives of major corporations, using a secure connection maintained by the Business Roundtable.[122]

DHS also developed channels for communicating threat information without raising the overall threat level. The department issued threat advisories or less urgent information bulletins for specific locales or sectors. Access to these communications was often restricted, however, leaving the public uninformed. Officials explained that such information was shared on a need-to-know basis, since it was often derived from classified sources. A GAO review of a sample of secret threat advisories in 2004 concluded that they contained “actionable information about threats targeting critical national networks, infrastructures, or key assets such as transit systems.”[123]

In practice, however, the terrorist threat warning system remained problematic. Several in-depth evaluations and surveys found that rankings were little used by its intended audiences. The Gilmore Commission, a broad-based congressional commission charged with continuing oversight of domestic responses to terrorism, concluded in 2003 that “[t]he Homeland Security Warning System has become largely marginalized.” On occasion, governors and mayors declined to elevate threat levels or take other federally recommended actions. Public and private groups expressed frustration at the lack of information about the character and location of threats. The commission recommended the creation of a regional alert system featuring specific guidance, as well as training local officials for responses to each threat level.[124]

A report by the nonpartisan Congressional Research Service in 2003 concluded that threat alerts were so vague that the public “might begin to question the authenticity” of threats and therefore ignore them. The report noted that the government “has never explained the sources and quality of the intelligence upon which the threat levels were based.”[125]

Government officials have rarely received information specific enough to act upon. A survey by the General Accounting Office in 2004 found that sixteen of twenty-four federal agencies had received information about elevated threat levels from the media before they received it from homeland security officials.[126] One of the potential strengths of the alert system was that it was constructed to work synergistically with regulatory requirements. Each federal department was required to come up with its own protective measures appropriate to each threat level and to take those actions each time the threat level was raised. However, federal agencies surveyed by the GAO reported that changes from yellow to orange had minimal impact on their practices, since they maintained high levels of security at all times.[127]

State officials, too, reported that they received much of their information about changed threat levels through the media and got little specific information from the government. The GAO survey found that fifteen of forty states learned about threat level changes from the media before they heard from federal officials in at least one instance. State and local officials reported that learning about threats at the same time as the public could carry heavy political costs. State officials also noted that they received conflicting advice from different federal authorities about what actions to take.[128]

The most serious failing of the transparency system has been its lack of meaningful information and guidance. Local officials, always on the front lines in preparing for and responding to disasters, need accurate, specific, and timely information. A report by the minority staff of the Senate Governmental Affairs Committee concluded in 2003 that two years after the attacks on the World Trade Center and the Pentagon, state and local officials had too little information to respond to terrorist attacks. The report noted that effective communication channels still had not been established with state and local officials, so states and localities had no effective way of communicating with one another or of learning from the successes or mistakes of others.[129]

A June 2004 report by the nonpartisan GAO echoed these themes. It suggested that warnings would be more effective if they were more specific and action-oriented; communicated through multiple methods; included timely notification; and featured specific information on the nature, location, and timing of threats as well as guidance on actions to take in response to threats.[130]

The public remained confused. Information accompanying increases in the threat level often has been vague or irrelevant to the daily activities of most Americans. Most state governments and many local governments have developed their own alert systems which are not necessarily consistent with the federal system. The administration has also sent mixed messages to the public concerning what actions to take. In raising the threat level to orange on September 10, 2002, for example, Secretary Ridge told people to “continue with your plans” but “be wary and be mindful.”[131]

In June 2003, Ridge acknowledged that the system needed improvement. “We worry about the credibility of the system…we want to continue to refine it, because we understand it has caused a kind of anxiety.”[132]

Members of Congress from both parties expressed growing impatience with vague and conflicting messages. After the government raised the threat level to orange over the 2003 Christmas holidays and told citizens to be vigilant but continue their daily routines, Christopher Shays (R-Conn.) asked: “Why would the department tell people to do everything they would normally do? ...We’re at high risk.” Christopher Cox (R-Calif.), chairman of the House Select Committee on Homeland Security, noted that vague warnings could also cause too much action, citing evidence that groups had canceled field trips and other activities.[133]

Senator Frank R. Lautenberg (D-N.J.) noted that “the system may be doing more harm than good.”[134]

Public confusion was reflected in polls. A Hart-Teeter poll sponsored by the Council for Excellence in Government in March 2004 found that 73 percent of those polled were anxious or concerned about terrorism and 34 percent had looked for information about what to do in the event of an attack, but only one person in five was aware of state or local preparedness plans.[135]

Earlier Fox News polls found that 78 percent of those responding did not know or said they were not sure what the current threat level was and that 90 percent responded to recent elevation of the threat level by going about their lives as usual.[136]

A New York Times poll in October 2004 found that nearly two-thirds of those responding did not have emergency kits prepared and more than two-thirds did not have communication plans. Philip Zimbardo, the president of the American Psychological Association, suggested that the terrorist threat system had turned the United States into a nation of “worriers, not warriors,” by “forcing citizens to ride an emotional roller coaster without providing any clear instructions on how to soothe their jitters.” He noted that a large body of research suggested that effective safety measures required a credible source, information about the particular event that created a threat, and information about specific actions citizens could take to reduce risks.[137]

Improving the Quality and Fairness of Critical Services and Processes


Disclosing Union Finances to Minimize Corruption
In the 1950s, about one-third of the U.S. workforce in the private sector was unionized (as compared to 8 percent in 2005), and unions represented the majority of workers in steel and auto manufacturing, trucking, construction, food processing, and other industries central to the economy. Union leaders like John L. Lewis, Walter Reuther, George Meany, and Jimmy Hoffa were well-known national figures. The considerable economic and political influence exercised by labor unions provoked concern in the business community and in Congress.[138]

In 1957, congressional hearings chaired by Senator John L. McClellan (D-Ark.) focused on one source of concern: bribery, fraud, and other forms of racketeering in parts of the labor movement. The two-year, high-profile, and often sensational Senate investigations revealed corruption in a number of major labor organizations and resulted in calls for government intervention in union governance.[139] In this crisis atmosphere, Congress debated different methods to improve standards of democracy, fiscal responsibility, and transparency in private-sector labor organizations.

Political compromise produced the Labor Management Reporting and Disclosure Act (LMRDA), which created standards for democratic governance and required unions to periodically reveal detailed information regarding financial practices and governance procedures.[140] Disclosure requirements were relatively narrow in scope, focusing on union balance sheets, loan activities, officer salaries, and line-item disbursements (e.g., for employee salary and benefits, administrative expenses, and rent and operating expenses) rather than on programmatic expenditures at the national and local union level.[141] A division of the U.S. Department of Labor, the Office of Labor Management Services (OLMS), was created to enforce the law, including its disclosure provisions.[142] The penalties associated with failing to provide timely and accurate reports were significant. From the start, disclosure imposed substantial costs on union officers but offered few benefits to them, creating incentives for officers to provide minimal information.[143]

For most of the disclosure requirement’s history, it was difficult and costly for union members to gain access to the information that was ostensibly made public.[144] They had to go to a reading room at the Labor Department in Washington, D.C., or to a regional office, or make a request by mail, paying a per-page charge.[145] Even then, information remained fragmented. Regional offices carried only records relating to union affiliates in their geographical area.[146] Most union members were unaware that the information existed, and even for those who learned about it, reporting forms proved technical and difficult to interpret.[147]

These high costs to individual information users created a potential role for intermediaries. But, as of 2006, it remained uncommon to find formal groups within unions that could act independently of incumbent officers and were capable of playing an intermediary role. Employers, too, rarely used the information from the disclosure system to discredit unions – they had more effective tools at hand. The decline of union strength beginning in the early 1980s also made many in the labor movement reluctant to “air dirty laundry” in public for fear of providing ammunition to antiunion employers and damaging public support for the labor movement.[148]

With high costs to information disclosers and users, and few intermediaries available to lower user costs, it is not surprising that the scope, accuracy, and use of this disclosure system did not improve much in forty years. The only significant expansion in scope occurred with the passage of legislation that created similar access to union financial information for federal government workers and the addition of reporting requirements for financial institutions that made loans to unions.[149]

Accuracy or timeliness of the disclosed information improved little. The financial categories and definitions remained the same, as did the level of required financial detail.[150] And despite strong enforcement provisions, the annual delinquency rate in filing reports was 25 percent, the GAO found in 2000. The likelihood of a recordkeeping inspection was small, and most penalties were directed toward unions that intentionally failed to file or that falsified reports.[151]

Overall use of information by rank-and-file union members remained minimal. Contrary to Congress’s expectation that information would be used by union members, most users over the past three decades have been business groups, antiunion consultants, or academics.[152] In 1999, a typical year prior to the creation of Internet-based access, the Labor Department responded to only eight thousand disclosure requests from all sources (out of 13 million union members who were covered by the transparency policy).

The costs of disclosing and particularly of using information, however, fell substantially when Congress appropriated funds in fiscal years 1998 and 1999 to develop and implement electronic filing and dissemination of reports. Over the following three years, the Labor Department developed systems for both filing and accessing disclosure forms via the Internet.[153] As of 2006, unions could file forms electronically, and users could view and print all union financial reports from the year 2000 to the present, search records by a variety of criteria, and request copies from earlier periods via the Department of Labor’s Internet Public Disclosure Room (http://www.union-reports.dol.gov).

The most significant changes to union financial reporting requirements since 1959 came with the election of George W. Bush in 2000. From 2001 to 2006 the Bush administration dramatically increased funding to the Labor Department office that administers the disclosure system (while reducing budgets in much of the rest of the Labor Department), expanding the number of full-time equivalent staff from 290 in FY 2001 to 384 in its proposed FY 2006 budget, and raising overall funding from $30.5 million in FY 2001 to $48.8 million in its proposed FY 2006 budget.[154] The administration cited improving the accuracy and timeliness of union reporting as one of the strategic priorities for this division.

More important, the Bush administration used its authority to issue regulations to alter a variety of reporting requirements.[155] These included expanding reporting for smaller labor unions; requiring electronic filing; and changing the way that financial information is provided, for example by requiring that unions disclose information on all services purchased for five thousand dollars or more.[156] The new regulations also required reporting of financial information on a programmatic—as well as a line-item—basis (e.g., providing information on the amount of money spent for representation, organizing, and other major union activities).[157] Individual unions and the AFL-CIO opposed many of these changes, arguing that they would substantially increase the costs faced by labor organizations with little additional benefit to union members. They ultimately lost these legal challenges in 2005.[158]


Disclosing Campaign Contributions to Reduce Corruption
Public disclosure of campaign contributions to congressional and presidential candidates represents one of the United States’ earliest, most sustainable, and most perennially controversial targeted transparency systems.

From the beginning, the primary purpose of campaign finance disclosure was to reduce corruption in government. In Buckley v. Valeo, the 1976 Supreme Court decision that upheld the constitutionality of federal disclosure requirements, the Court concluded that disclosure reduced corruption in three ways. First, it provided the electorate with information about where money came from and how it was spent, in order to aid voters in evaluating those running for office, including alerting voters “to the interests to which a candidate is most likely to be responsive.” Second, disclosure helped to “deter actual corruption and avoid the appearance of corruption by exposing large contributions and expenditures to the light of publicity.” Such exposure “may discourage those who would use money for improper purposes either before or after the election,” because “a public armed with information about a candidate’s most generous supporters is better able to detect any post-election special favors that may be given in return.” Third, the Court said, reporting was “an essential means of gathering data to detect violations of contribution limits.”[159] Disclosure worked in tandem with a rule-based regulatory system that limited amounts and sources of contributions.

The use of transparency to reduce campaign finance corruption began early and improved in response to episodes of perceived abuses. The first campaign finance disclosure law, the Publicity Act of 1910,[160] was championed by President Theodore Roosevelt and progressive reformers as an antidote to the influence of big business in politics. Roosevelt pressed for disclosure after his opponent in the 1904 election accused him of accepting corporate gifts intended to buy influence in the administration. Civic organizations such as the National Publicity Law Organization kept pressure on Congress until the law was passed.[161]

Today’s national system of campaign finance disclosure dates from the Federal Election Campaign Act (FECA) of 1971, which was enacted in response to the perceived ineffectiveness of earlier laws and the growing influence of money in politics. FECA required candidates for national office to disclose contributions of one hundred dollars or more in quarterly reports. In election years, contributions of five thousand dollars or more had to be reported within forty-eight hours and disclosed to the public forty-eight hours after reporting. The law also limited contributions and media expenditures.[162] Allegations of corruption in the 1972 presidential election, including the Watergate scandal, led Congress to expand disclosure requirements in 1974 and to create an independent bipartisan Federal Election Commission (FEC) that received disclosed information and made it available to the public.[163] Later amendments aimed to broaden disclosure and make it more efficient. Reforms required reporting by “527” nonprofit organizations that promoted candidates but were not campaign committees and focused reporting on committees that raised substantial amounts of funds.[164]

In 2002, Congress again tightened spending limits and strengthened disclosure. The main purpose of the McCain-Feingold law (officially, the Bipartisan Campaign Reform Act) was to close loopholes that allowed candidates and their supporters to use “soft money” to circumvent campaign spending limitations. Soft money refers to funds used to finance issue ads that promote particular candidates. As part of the effort to regulate “soft money,” Congress required organizations that sponsored candidate-specific issue ads to disclose the names of contributors and spending on such ads. Anyone who “knowingly and willfully” violated disclosure provisions could face a maximum penalty of five years in prison.[165] In McConnell v. Federal Election Commission, decided in 2003, the Supreme Court again upheld the constitutionality of disclosure requirements as an important means of informing voters, reducing corruption, and enforcing spending limits.[166]

Campaign finance disclosure remains widely supported in concept but perennially debated in its specifics. Over the years, the system has gained diverse users and the support of many candidates. The press, advocacy groups, political consultants, groups concerned with expanding public information, and other intermediaries often repackage the disclosed data and provide their own interpretations for the public. Federal enforcement authorities use the data to ferret out violations of spending limits. Candidates use the data to gather information about their opponents and sometimes have a reputational interest in disclosing campaign finance information beyond what is required by federal law. In the 2000 and 2004 elections presidential candidates disclosed all their contributions on campaign Web sites.[167]

The Internet is fundamentally changing the dynamics of campaigning and of campaign finance disclosure. By 2006, candidates used the Internet to raise money, convene virtual town meetings, collect signatures, reach organizers, and customize email messages to supporters. The campaigns of George W. Bush and John Kerry in 2004 raised $100 million on the Internet, mostly in small donations. Howard Dean, former governor of Vermont, built much of his 2004 presidential campaign on the Internet. Advocacy groups used the Internet to convene online primaries and mobilize supporters and resources. Ordinary citizens used the Internet to share facts, express their views about candidates, and provide contributions.[168]

In 2006, Congress and regulators were still struggling to integrate into federal campaign laws changes in campaigning brought about by the Internet. “The rise of the Internet ...changes the fundamentals of political speech,” Trevor Potter and Kirk L. Jowers concluded in an early analysis of election law and the Internet. By making it possible to reach large audiences with rich and customized information at little or no cost, the Internet challenges the premise of election law that controlling and disclosing funding controls corruption. “With no cost of communication, current law has nothing to measure ...[and] the entire mechanism for disclosing political expenditures ...is thrown into question.”[169] The Internet has also created new ways to spread false or misleading information. Sham Web sites proliferated during the 2004 campaign, and both Republicans and Democrats routinely set up sites to post negative information about opponents.[170]

In the 1990s and early 2000s, new requirements also employed the Internet and computer technology to provide more timely campaign finance information. In the 1970s, committees made paper or microfilm filings to the FEC, which could be accessed by the public only at FEC headquarters. In the early years of the Internet, the FEC allowed information to be downloaded for a fee. By 2006, most information was required to be filed electronically and was available on the FEC Web site within forty-eight hours free of charge.[171] More difficult questions concerned whether and how to regulate campaign activities on the Internet. In March 2006, the FEC provided some answers by ruling unanimously that most political communication on the Internet was not covered by campaign finance laws. Only paid political Internet ads were covered by such laws.[172] Exempting most political communication on the Internet from regulation was “an important step in protecting grass roots and online politics,” commission chairman Michael E. Toner told the New York Times.[173]

Contentious issues continued to surround campaign finance disclosure. A report by the Senate Committee on Government Affairs in 1996 described widespread and systematic evasion of disclosure requirements.[174] The FEC’s restricted budget raised continuing questions about the commission’s capacity to monitor and enforce disclosure requirements. Finally, the growth of the Internet raised new issues concerning the appropriate balancing of the public interest in disclosure against the public interest in protecting freedom of expression.

Disclosing Lending Practices to Reduce Discrimination
The Home Mortgage Disclosure Act (HMDA), initially enacted in 1975 and substantially expanded in 1989,[175] required banks to disclose detailed information about their mortgage lending. The law aimed to curb discrimination in such lending and to create more equal access to credit. The disclosure requirement compelled banks, savings and loan associations, and other lending institutions to report annually the amounts and geographical distribution of their mortgage applications, originations, and purchases, disaggregated by race, gender, annual income, and other characteristics. The data, collected and disclosed by the Federal Financial Institutions Examination Council, were made available to the public and to financial regulators to determine if lenders were serving the housing needs of the communities where they were located.[176] The Examination Council was an interagency body that included the Federal Reserve System, the Federal Deposit Insurance Corporation, and other agencies. In 2004, as many as 33.6 million loan records were reported by nearly nine thousand financial institutions.[177]

Mortgage lending disclosure was part of Congress’s response to activists’ calls, in the later stages of the civil rights movement of the 1960s and 1970s, for greater economic equality. It followed congressional action in 1968 to bar racial discrimination in housing sales or rentals; a settlement negotiated by the Department of Justice to end racial discrimination in the appraisal profession; and approval of the federal Equal Credit Opportunity Act in 1974, which outlawed racial and ethnic discrimination in lending.[178] Community-based organizations pressed for disclosure requirements to aid their local campaigns to end lending discrimination. One of the most prominent figures in this debate was Gale Cincotta, a Chicago-based leader of the fair housing and community reinvestment movement, who founded National People’s Action and the National Training and Information Center, two of the local organizations that documented the retreat of banks from inner-city neighborhoods in the 1960s and 1970s and pressed for more equitable lending. She and other activists found an ally in Senate Banking Committee chair William Proxmire (D-Wis.). In 1975, Proxmire sponsored a bill requiring disclosure of lending practices.[179] Despite opposition from the banking industry, the requirement was ultimately approved by a narrow margin in both the Senate (47–45) and the House (177–147).[180]

Under initial disclosure requirements, banks were required to report minimal data about the geographic location of home loan approvals and purchases. Additional legislation expanded and refined these disclosure requirements. In 1977, Congress approved the Community Reinvestment Act (CRA), which required lending institutions to meet the credit needs of the communities in which they operated and linked community lending records to approval of merger applications.[181] In 1980, Congress approved the Housing and Community Development Act, which directed the Federal Financial Institutions Examination Council to serve as a central clearinghouse for mortgage lending data.[182] Finally, in response to the savings and loan crisis of the 1980s, Congress approved in 1989 the Federal Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA),[183] which sought to stabilize and provide new oversight for the savings and loan industry. Community reinvestment groups lobbied successfully to include improvements in disclosure, such as reporting of applications as well as loans; reporting of the race, sex, and income of borrowers and applicants; and reporting by a broader range of mortgage lenders.[184]

As Congress expanded the scope and depth of this transparency system, it gained wider use. Advocacy groups used mortgage lending data to document constraints on credit in their communities and to negotiate new mechanisms for low-income lending with individual banks. Broad-based community reinvestment task forces in Washington, Rhode Island, New Jersey, and Michigan forged partnerships among community organizations, lending institutions, and state and local governments to address access problems. Investigative reporters, financial analysts, and intermediaries used the information to document pervasive patterns of discriminatory lending and the exodus of banks from low-income neighborhoods. In 1988, for example, the Atlanta Journal-Constitution reported on widespread redlining in that city in “The Color of Money,” a series of articles that received extensive national attention.[185]

In 1992, the Boston Federal Reserve conducted a rigorous study that concluded that race had a strong influence in lending decisions.[186] The study received broad media coverage, confronting banks with discrimination allegations from a particularly authoritative source.

As they responded to a wave of requests for bank mergers in the late 1980s and 1990s, federal regulators also employed mortgage lending data in deciding whether to grant approvals. The banking industry was shaken in 1989 when the Federal Reserve Bank first exercised this power by denying a merger request from Continental Illinois National Bank and Trust Company of Chicago on the ground that the bank had not met its community reinvestment requirements. Advocacy groups that tracked the performance of particular banks often petitioned regulators to turn down merger requests if a bank’s performance indicated unfair lending practices.

This shift in the competitive environment led many more banks to improve lending practices in the 1990s.[187] The competitive shift resulted in part from mortgage lending disclosure and the requirements of the Community Reinvestment Act, as well as from the proliferation of sophisticated community organizations that had developed the expertise to understand bank lending patterns and negotiate with financial institutions. More banks developed products, divisions, and methods to compete in low-income markets, and bankers acknowledged that disclosure and community reinvestment requirements had proven less burdensome than expected.[188] The accuracy and scope of disclosed lending data also continued to improve. Disclosure became more frequent, data quality increased, more financial institutions were required to report, and data were collected and distributed electronically.[189]

After the successes of the 1990s, community organizations and regulators turned their attention to predatory lending, a practice in which vulnerable minorities were offered higher-interest mortgages and less-favorable terms than other borrowers.[190] In 2002, mortgage lending disclosure rules were amended to require banks to disclose not only the disposition of loan applications but also mortgage prices. Beginning in 2004, lenders were required to report data on loan pricing for loan originations in which the annual percentage rate exceeded the yield of comparable Treasury securities by a specified amount. These new data allowed intermediaries such as the National Community Reinvestment Coalition and the Association of Community Organizations for Reform Now to document disparities in access to credit and press for measures to address predatory lending.[191] Regulators used the expanded information to enforce fair lending laws. In 2005, the Federal Reserve incorporated these new data into its statistical strategies for identifying potentially discriminatory institutions that warranted closer regulatory scrutiny.[192]

Disclosing Plant Closings and Layoffs to Reduce Community Disruptions
Concerned over the economic impacts of intensifying global competition in the manufacturing sector and facing political fallout from a growing number of high-profile plant closings and mass layoffs, Congress debated a variety of proposals in the late 1970s and early 1980s. Policy options ranged from restrictions on employer rights to close major facilities to industry-based policies to improve competitiveness and major modifications of the unemployment insurance system.[193]

In 1988, political compromise led to a more modest targeted transparency approach: the Worker Adjustment and Retraining Notification Act (WARN).[194] This law sought to protect affected parties from the effects of major employment loss by requiring covered employers to provide advance notice of plant closings or large-scale layoffs to affected workers and local communities.[195] The aim of the new disclosure requirement was to improve post-layoff and plant closing outcomes for displaced workers as well as to provide communities facing significant economic impacts with time to find alternative solutions or make adjustments for the impending closings.

Even this modest, disclosure-based response to economic restructuring involved significant political compromises. Opponents of advance notice argued that it would restrict the capital mobility that was increasingly important given international competition from countries like Japan and South Korea. In so doing, it would further widen labor productivity gaps with the rest of the world, making U.S. companies less competitive. Further, critics of advance notice argued that it would lead customers, suppliers, and capital markets to overreact, making already weakened companies less able to recover and expand. If advance notice was to be required, they argued, it should be provided a relatively short time before a plant closing. It should also exempt wide classes of employers whose decisions to reduce employment reflected the normal ebb and flow of production, rather than more profound, long-term reductions in employment.[196]

The resulting disclosure requirements reflected these concerns. Covered employers were required to provide affected employees with only sixty days’ notice of a closing. Although virtually all workers at covered employers – hourly, salaried, and managerial workers – were entitled to notice, employer coverage was quite restricted. Private and not-for-profit employers were covered if they had one hundred or more workers, but employees were excluded from that count if they had worked for fewer than six months in the past year or fewer than twenty hours per week on average. That meant that a large number of small businesses were not required to provide advance notice of layoffs or closings.

The definition of plant closing and mass layoff also left many potential company decisions involving large employment cuts outside the targeted transparency system’s disclosure requirement. A covered employer was required to provide advance notice if an impending shutdown would lead to a loss of 50 or more workers in a thirty-day period. Mass layoff was defined narrowly as reducing employment at any site of 500 or more workers or laying off 50–499 workers if that number represented at least a third of the workforce.[197] In addition, covered employers were not required to provide advance notice for a variety of “unforeseeable” business reasons, for natural disasters, or where it could be shown that even the sixty-day disclosure would cause irreparable harm to the business’s viability.[198]
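To make the coverage arithmetic concrete, the sketch below restates these thresholds as a simple decision rule in Python. It is a minimal illustration under simplifying assumptions of our own (the function names, the treatment of the part-time and short-tenure exclusions, and the one-third calculation are ours), not the statute or the Labor Department’s regulations.

```python
# Illustrative sketch only: a simplified reading of the WARN thresholds
# described above, not the statute's full definitions or legal guidance.

def counts_toward_coverage(months_employed, avg_weekly_hours):
    # Workers employed fewer than six months in the past year, or averaging
    # fewer than twenty hours per week, are excluded from the employee count.
    return months_employed >= 6 and avg_weekly_hours >= 20

def employer_covered(countable_employees):
    # Private and not-for-profit employers are covered at one hundred or more
    # countable workers.
    return countable_employees >= 100

def notice_required(countable_employees, site_workforce, workers_losing_jobs,
                    is_plant_closing):
    # Sixty days' notice applies to plant closings costing 50 or more jobs in a
    # thirty-day period, and to mass layoffs of 500 or more workers, or of
    # 50-499 workers when that is at least one-third of the site's workforce.
    if not employer_covered(countable_employees):
        return False
    if is_plant_closing:
        return workers_losing_jobs >= 50
    if workers_losing_jobs >= 500:
        return True
    return workers_losing_jobs >= 50 and workers_losing_jobs * 3 >= site_workforce

# Examples: a four-month hire does not count toward coverage, and a covered
# employer laying off 150 of a 400-person site (over a third) would owe notice.
print(counts_toward_coverage(4, 40))                         # False
print(notice_required(1200, 400, 150, is_plant_closing=False))  # True
```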

The law did not provide an extensive apparatus for implementation. Unlike most federal workplace policies, the advance notice requirement did not vest a particular division of the U.S. Department of Labor with authority to investigate or enforce the law. Enforcement was provided instead through lawsuits lodged in federal courts by workers, their representatives (if any), and/or local governments. An employer found in violation of the disclosure requirement could be required to pay the affected workers back pay and benefits for the period when notice was not provided (up to sixty days). Employers were also subject to civil penalties of up to five hundred dollars for each day of violation. Companies were left with considerable discretion concerning the means by which they would notify workers and communities. The law did not provide a notification format or indicate through whom (union officers or other representatives) workers would be contacted or the “local community” informed.[199]

The combination of restrictive employer coverage and the rather narrow definition of plant closings and mass layoffs has meant that a relatively small percentage of layoffs has been covered by the disclosure policy’s requirements. In an early study of the requirement’s impact, Ehrenberg and Jakubson concluded that although compliance with the policy was high, “WARN does not affect a substantial proportion of permanently laid off workers.”[200] That conclusion was still valid in 2006, given the large percentage of the workforce employed in workplaces with fewer than a hundred workers and the fact that the vast majority of employment reductions (even in large workplaces) do not fall within the narrow definitions of employment loss described in the regulation.[201]

Disclosure provisions for plant closings and layoffs have not changed substantially since their initial approval, although several recent events have led Congress to consider expanding or modifying disclosure. Following the terrorist attacks on the United States in 2001, Congress held hearings about the significant employment dislocations associated with those attacks – particularly in the hotel and hospitality industries – which led some in Congress to call for expanding the reach of the disclosure policy.[202] In 2004, high-profile instances of “offshoring” work to India and China led the Senate to consider expanding the transparency rules’ definition of employment loss.[203] As of 2006, however, neither of these efforts had led to changes in the law.

Disclosing School Performance to Improve Public Education
Many states enacted school report card requirements in the mid-1980s as concern about the inadequacies of public education mounted.[204] In 1983, A Nation at Risk, a report commissioned by President Ronald Reagan’s secretary of education, Terrell Bell, warned that American public education often was mediocre compared to that of other countries.[205] In a study of students’ performance in twenty-two countries, U.S. students placed twelfth. SAT scores, too, had declined in the 1960s and early 1970s. Press coverage of discipline and drug problems also suggested the need for better school accountability. Education was the largest single item in most state budgets, and candidates featured education issues prominently in state election campaigns in the 1980s.

State and local officials saw school report cards as one way to provide parents with greater choice and to put pressure on school administrators to improve performance. Report cards could work in tandem with other novel approaches that states were experimenting with – vouchers to pay for private schools, charter schools, and performance contracting, a form of financing that allowed schools to design educational programs and secure resources in exchange for agreements to achieve certain performance outcomes. Report cards could reward schools for meeting their performance targets.[206]

In an effort to spread the innovative practices of a few states, Congress required in 1994 that all states establish school performance standards and test students to assess whether they met these standards.[207] Congress also required educational agencies receiving funding under Title I of the Elementary and Secondary Education Act to “publicize and disseminate to teachers and other staff, parents, students, and the community” the results of annual performance reviews.[208]

The content, presentation, and means of disseminating information in school report cards continued to vary widely from state to state, however. According to a national study by Education Week in 1999, only thirty-six states published regular report cards on individual schools.[209] Most presented information on schools’ past test scores and on state averages. Reporting on other aspects of performance – school safety, class size, and faculty qualifications – was less common. Only a quarter of the states with report cards presented information that allowed comparisons among test scores of schools with similar student demographics. Some states distributed school report cards to students, while most made them available on the Internet.

One major problem was the lack of consensus about the kinds of data that school report cards should contain to measure performance. Surveys conducted in 1998 found that parents and educators sometimes had quite different views regarding important content, and that existing school report cards did not always contain information that both regarded as very important. Educators were more likely to want demographic and disaggregated data, while some parents were concerned that such data would be divisive. Only about a third of those polled thought that schools should be judged principally on student achievement on standardized tests. Most regarded indicators of teacher quality and school climate as among the critical data to include.[210]

In addition, some early research suggested that surprisingly few parents and educators made use of report card information. Research by Public Agenda conducted in 1999 found that only 52 percent of teachers and 31 percent of parents had seen a school report card.[211]

In 2001, the George W. Bush administration championed the No Child Left Behind Act as a centerpiece of public education reform. Among other provisions, the law required school districts that received federal assistance for disadvantaged students under Title I of the Elementary and Secondary Education Act to publish report cards for each of their schools.[212]

The new federal requirements demanded disclosure of more information than was commonly published by districts at that time. School report cards had to disclose students’ achievement on state tests and disaggregate test scores by race, disability status, and English proficiency. They also had to disclose teacher qualifications and show trends in achievement, dropout rates, graduation rates, and percentages of students not tested.[213]

The quality of school report cards has increased substantially since the enactment of the No Child Left Behind law, although report cards still fall short of full compliance.[214] By 2004, all fifty states provided school report cards and forty-four states disaggregated student achievement data by race and disability as required by the 2001 law. However, only fourteen states disaggregated graduation data and provided information regarding the number and percentage of “highly qualified” teachers as required by the law.

At the same time, multiple federal and state reporting requirements created confusion. In 2004, nineteen states had more than one report card per school and sixteen states created special report cards to comply with the requirements of the No Child Left Behind law.[215]

One careful study of state-level student performance in 2004 found that the incentives and sanctions associated with accountability systems in education reform had a significant and positive impact on test scores but that school report cards alone had no independent statistically significant effect.[216] In 2006, it was still too early to determine whether school report cards would improve over time and whether they would create incentives for better public education.

TARGETED TRANSPARENCY IN THE INTERNATIONAL CONTEXT

Harmonizing Disclosure of Corporate Finances to Reduce Risks to Investors
International rules for corporate financial disclosure evolved slowly in the 1990s as rapid integration of securities markets made compliance with widely varying national rules both costly and confusing for companies and regulators. By 2006, a limited effort by a small group of international accountants to write disclosure rules for companies that sold stock in more than one country had become an unusual instrument of international governance. No treaty or international agreement provided a framework for financial disclosure rules. Instead, private efforts became public law by means of a slow process of government endorsement.

An important date was January 1, 2005, when the European Union (EU) required more than seven thousand public companies headquartered in its twenty-five member countries to follow the financial disclosure rules established by the private International Accounting Standards Board (IASB).[217] Officials of the Bush administration announced that the United States, too, might hand over to the board as early as 2007 financial reporting rule making for foreign listings.[218] Russia, South Africa, Australia, Taiwan, Hong Kong, and India also had plans to adopt the rules made by the international board.

However, the seemingly technical task of harmonizing accounting standards produced difficult political issues from the start, because what financial information was disclosed and how it was disclosed could change markets. Reporting requirements could alter the projects firms chose to undertake, how they compensated employees, how well firms fared against competitors, and how effectively they attracted investors. Traditionally, national financial disclosure rules varied so widely that a substantial profit under one country’s rules could be a substantial loss under another’s.

International standards developed gradually over a generation. In 1973, a committee of private-sector accountants from nine countries formed the International Accounting Standards Committee and began issuing proposed international accounting standards. The committee, one of several competing efforts in the 1970s and 1980s, initially skirted thorny political issues by proposing standards that left companies and national regulators wide latitude in interpretation.[219]

In the 1990s and early 2000s, rapidly integrating markets and international financial crises increased companies’, stock exchanges’, and national regulators’ interest in more rigorous international disclosure rules. The Asian financial crisis of the mid-1990s created calls for greater corporate transparency, even though corporate reporting flaws were not among its main causes. Accounting scandals in the United States and Europe in 2001–2004 alerted international investors to hidden risks and highlighted major weaknesses in national disclosure rules.

Company executives, stock exchange managers, accountants, investors, and other market participants each had somewhat different reasons for supporting harmonization of corporate financial reporting. Multinational companies, seeking to diversify their shareholder base and lower their cost of capital by listing on stock exchanges outside their home countries, found duplicate reporting not only burdensome but also sometimes embarrassing. Managers of large stock exchanges, seeking to gain listings from foreign companies, found their national reporting rules created a competitive disadvantage. The accounting profession, dominated by five international firms through most of the 1990s, feared that conflicting statements of profits and losses under different national rules could impair accountants’ credibility. Investors, seeking higher returns in foreign markets, found variable results a new source of uncertainty.

In order to gain public legitimacy, the harmonization effort started by a small committee of accountants – the IASB – reformed its structure and improved procedural fairness in 2000 and 2001. The board’s new structure emphasized expertise rather than national representation, paralleled that of the U.S. and British accounting standard setters, and was dominated by members from countries with Anglo-American accounting traditions.[220] The reformed board consisted of twelve full-time and two part-time members who served a maximum of two five-year terms and were appointed for their technical expertise as auditors, preparers, and users of financial statements. To coordinate the board’s rule making with that of national standard setters, seven board members were given formal liaison responsibilities with specific countries – the United States, Britain, France, Germany, Japan, Canada, and Australia – giving those countries an elite status. The board also drew on the expertise of a geographically diverse advisory council and interpretations committee. By early 2005, the board had issued forty-one accounting standards, including controversial requirements for expensing of stock options and accounting for derivatives.[221]

The board aimed to produce international standards “under principles of transparency, open meetings, and full due process.”[222] Board meetings were open to the public. Agendas of board and committee meetings were posted in advance on the board’s Web site, and summaries of decisions were posted afterward. Draft standards and interpretations were subject to public notice and comment (usually 120 days for standards and 60 days for interpretations), and sometimes to public hearings. The publication of final standards included a discussion of their rationale, responses to comments, and the board’s dissenting opinions. The board also published an annual report. The board and affiliated organizations, headquartered in London, employed about sixty people, including board members, and had an annual budget of about $18 million, provided through contributions from accounting firms (including $1 million from each of the four largest international firms), corporations, central banks, and international organizations.[223]

As in the United States and Britain, a self-perpetuating oversight group, the International Accounting Standards Committee Foundation (IASCF), was intended to provide a buffer from political pressures and assure efficient operation. Its trustees chose board members, appointed the board chair, raised operating funds, and reviewed the board’s constitution and procedures every five years. Its constitution provided that its twenty-two-member self-perpetuating “financially knowledgeable” board of trustees be “representative of the world’s capital markets and a diversity of geographical and professional backgrounds.” It called for six representatives from North America, six from Europe, four from the Asia/Pacific region, and others without geographical designation.[224] The foundation’s first chair was Paul Volcker, former head of the United States’ Federal Reserve Board.

Informal public and private networks also supported the board’s work. The EU encouraged the creation of a private-sector technical group (the European Financial Reporting Advisory Group, EFRAG) and formed the Committee of European Securities Regulators (CESR), which quickly established guidelines for member states’ enforcement bodies, including independence and authority to monitor and correct accounts. To reduce the chances that each nation would in effect create its own standards through different interpretations, CESR also established a database of nations’ enforcement decisions and urged national regulators to follow precedents as they were established.[225] The International Federation of Accountants (IFAC) proposed a peer review system for periodically and randomly reviewing the accounts of multinational companies and issued a new standardized audit report form to improve the comparability of accounts.[226] In May 2004, the SEC and CESR announced that they were increasing their collaborative efforts in order to improve communication about regulatory risks between Europe and the United States and to promote convergence in future securities regulation.[227]

Enforcement of accounting standards, however, was left to national regulators. The board remained a private membership organization with no authority to compel nations or companies to adopt its disclosure rules. The public character of its authority rested solely on the endorsement of its processes and standards first and foremost by national governments and then by complex networks of national politicians, regulators, accounting firms, stock exchanges, companies, investors, and other market participants. Enforcement practices varied widely among nations that represented major markets.[228]

In 2006, the development of international corporate financial accounting standards appeared to be sustainable. Standards had improved markedly over time in scope, accuracy, and use. However, it was not yet clear what degree of harmonization the international board would achieve, whether a critical mass of nations and companies would continue to support the board’s efforts, and how well standards would be enforced by national regulators. Standards for financial derivatives, stock options, and other complex instruments remained controversial. Nations’ capacities to administer and enforce international disclosure rules varied widely, raising the possibility that standards would be accepted on paper but ignored in practice. EU companies complained that standards were costly and confusing: “The standards have been criticized by businesses of all sizes for making accounts unreadable and irrelevant,” the Financial Times reported in March 2006.[229] In addition, the board’s funding remained uncertain. The “big four” accounting firms continued to provide a third of funding, raising charges of undue influence, while other contributions were ad hoc.

Political realities suggested that gradual partial harmonization of standards and practices over a period of years was as much as could be expected. Whether such harmonization would reduce or increase hidden risks to investors remained to be seen.

Disclosing International Infectious Disease Outbreaks to Protect Public Health
From the mid-nineteenth century on, nations sought to create international practices to control the spread of infectious disease. International surveillance – the rapid reporting of disease outbreaks – was early recognized as a key to preventing deaths and illnesses. After several devastating cholera epidemics in the early 1800s, many nations negotiated international sanitary conventions that sought to harmonize variable national surveillance and quarantine laws.

Since 1951, the International Health Regulations of the World Health Organization (WHO) have governed international surveillance of infectious diseases among member countries. An arm of the United Nations, the WHO is governed by a World Health Assembly composed of representatives of the WHO member governments. International Health Regulations require member governments to inform the WHO about cases of specified infectious diseases within set time periods. Traditionally, national governments have controlled the flow of information on which disease surveillance is based. Regulations also specify public health activities at ports and airports and set procedures for trade and travel restrictions, including limits on those restrictions. Their stated purpose is to minimize the international spread of disease with minimal interference with trade and travel.[230]

By the 1970s, however, the WHO surveillance system was moribund. Only plague, cholera, and yellow fever were subject to international reporting rules, and member states routinely violated even those reporting obligations. In practice, member governments’ incentives to protect national reputation and economic stability often outweighed incentives to join in international efforts to report disease outbreaks. At the same time, vaccines and antibiotics minimized some common infectious diseases in the United States and Europe, easing political pressure for effective surveillance.[231]

But in the 1980s, the AIDS epidemic as well as the spread of other infectious diseases highlighted the failure of existing international regulations and reawakened international interest in more effective surveillance. In the United States, the national Institute of Medicine identified fifty-four infectious diseases that were on the rise owing to a combination of increased travel and trade, germs’ adaptability, and a lack of public health measures.[232]

In 1995, the World Health Assembly directed the World Health Organization to revise the failed government-centered surveillance rules. But reaching agreement on new surveillance rules proved to be a slow process. New International Health Regulations were not adopted until 2005.[233] Meanwhile, the WHO cooperated with private groups to create informal networks to share information. The Global Outbreak Alert and Response Network was designed to pool public and private information for response to international outbreaks. In 2001, the four-year-old network was officially endorsed by the World Health Assembly.[234]

However, it was the rapid spread of SARS (Severe Acute Respiratory Syndrome) in 2002 and 2003 that sparked the revival of the international system of infectious disease reporting. The disease first appeared in China’s Guangdong Province in November 2002, spread to thirty countries in six months, and killed more than seven hundred people. Public fears fed by a paucity of reliable information contributed to large economic costs – estimated at $40 billion.[235]

Significantly, initial information about the SARS outbreak did not come from government reports. It came from millions of cell phone and Internet messages in Guangdong Province and elsewhere in late 2002, as well as from information provided by private reporting systems such as ProMED-mail. It was these on-the-ground reports from ordinary citizens and local health workers that spurred the WHO to make inquiries of the Chinese government, which, in turn, led the Chinese government to acknowledge the outbreak and led the WHO to issue a global alert on March 12 and a travel advisory on March 15, 2003.[236]

The new capabilities of information technology not only marshaled far-flung resources to identify the source and character of the disease but also helped to combine the scientific expertise of many nations to bring the epidemic under control. Public health authorities in many countries cobbled together informal networks to respond with unprecedented speed. The WHO coordinated sixty teams of medical personnel to help control the disease in affected areas and a network of eleven infectious disease laboratories in nine countries, linked through a secure Web site and daily conference calls, to work on the disease’s causes and diagnosis. These networks made new scientific information available to researchers around the world and hastened collaborative progress on diagnosis and treatment. Researchers were able to identify the cause of SARS within a month. By July 2003, the five-month epidemic had ended.[237]

In retrospect, it was clear that the SARS epidemic coupled with advances in communication technology signaled the end of government control of the flow of information about disease outbreaks. Even in the absence of an international legal obligation, China was pressured into reporting the spread of SARS by masses of local data provided by villagers and aggregated by private electronic surveillance systems. In May 2003 the World Health Assembly acknowledged the legitimacy of the crisis-driven de facto changes in the international reporting system. In an important change, the assembly asked the WHO to continue using nongovernmental sources of information for surveillance. The WHO concluded that the SARS crisis demonstrated that government attempts to hide information carried a very high price – “loss of credibility in the eyes of the international community, escalating negative domestic economic impact, damage to health and economics of neighboring countries, and a very real risk that outbreaks within the country’s own territory can spiral out of control.”[238]

Labeling Genetically Modified Foods to Protect Health and the Environment
Controversies concerning the safety and environmental effects of genetically modified food crops created extraordinary political conflict and market disruptions in the United States, Europe, and developing countries during the 1990s and early 2000s. Early genetic modification of crops, introduced commercially in the mid-1990s, created corn, soy, and other grains, fruits, and vegetables that were resistant to pests or pesticides or enhanced to produce extra vitamins, proteins, or other nutrients. Genetic modification differed from conventional crossbreeding by altering plants at the molecular level, sometimes by combining the DNA of different species. In the pipeline were bioengineered plants that promised drought resistance or immunity to or treatments for specific diseases. However, new benefits were accompanied by questions concerning the possible introduction of allergens when DNA from different species was combined; the long-term environmental effects of pest-resistant crops on beneficial insects, birds, and animals; and the possible creation of “super weeds” or other pesticide-resistant plants or insects from inadvertent crossbreeding between conventional and bioengineered plants.[239]

The EU and the United States took different approaches to the introduction of genetically modified food crops in the mid-1990s. The EU regulated genetically modified crops as a novel health and environmental issue, requiring thorough review and risk assessment for each field trial and product introduction.[240] The United States regulated genetically modified crops as a variation on familiar health and safety concerns, allowing many field trials and introductions to take place without government permits.[241]

After an informal six-year ban on imports of genetically modified crops, Europe adopted a mandatory labeling regime in 2004.[242] After welcoming genetically modified crops, the United States adopted guidelines for voluntary labeling.[243] As of 2005, however, labeling had not improved the efficiency of international markets or public safety, and both its effectiveness and its sustainability were in doubt.

The European public responded to the sudden introduction of genetically modified foods by the American Monsanto Corporation in 1996 and 1997 with demonstrations and boycotts. Inflammatory headlines warned of the dangers of “frankenfoods”; Green Party representatives cautioned about environmental risks; respected consumer organizations called for product labeling or withdrawal; and Prince Charles, Paul McCartney, and other well-known figures echoed public skepticism about the safety of such foods. Already frightened by risks associated with mad cow disease (risks that initially were downplayed by public officials), an incident of dioxin-contaminated Belgian food, and the spread of hoof-and-mouth disease (none of which had anything to do with genetic modification), European consumers were distrustful of government and commercial assurances of food safety.

In contrast, the American public barely noticed the introduction of genetically modified foods. Antiregulatory sentiment ran high in the United States in the mid-1990s, following gains by conservatives in the midterm elections of 1994. Experts in government and the private sector debated safeguards and determined that no new regulatory system was needed for genetically modified foods. Risks could be considered product by product – just like risks associated with other advancing food technologies. Interestingly, the U.S. food industry favored a mandatory safety assessment for genetically modified foods, although the industry opposed mandatory labeling.[244]

In 1998, European Union member states instituted an informal ban on the import of bulk shipments of products that might contain genetically modified organisms, stopped approving genetically modified foods, and required labels on packaged foods already on the market that contained genetically modified corn or soy. In the United States, farmers rapidly increased production of genetically modified crops so that nearly 40 percent of corn acreage and more than 70 percent of soybean acreage was planted with crops engineered to increase resistance to pests or herbicides. Planting such genetically modified seeds had benefits for farmers: it could significantly reduce costs associated with plowing and pesticide purchases.

In the late 1990s, however, European protests spread to the United States and other countries. In 1999, protests by a variety of activist organizations led national farm associations in the United States to warn their members about the economic risks of planting genetically modified crops. Companies such as Frito-Lay and Nestle banned such crops from their products in the United States as well as in Europe. Gerber and H. J. Heinz removed genetically modified ingredients from baby food. Domestic incidents also triggered alarm. When StarLink, a variety of genetically modified corn approved only for animal feed in the United States, was found in taco shells sold in fast-food restaurants in 2000, it raised the specter of possible allergens. After ten years of commercialization, virtually all the production of genetically modified crops remained concentrated in only four countries – the United States, Canada, Argentina, and Brazil.

International disagreement took the highest toll in Africa. Zambia, Zimbabwe, Mozambique, and Malawi rejected U.S. food aid in 2002 because shipments contained genetically modified corn, even though those countries were threatened with famine conditions and genetically modified corn had been distributed without controversy in Zambia for six years. African nations could not risk losing the European market for their crops if the seed found its way into farmers’ fields. The United States remained the world’s largest exporter of agricultural products. But Europe, one of the world’s two largest importers (along with Japan), had more influence over market rules.

Scientific uncertainty continued to leave room for polarized debate. In the United States, the National Research Council remained supportive of the benefits of genetic engineering of crops but also emphasized the importance of assessing each product individually for potential risks from allergens, contamination of other plants, or damage to insects or animals. The Research Directorate General of the EU, as well as French and British authorities, acknowledged that no human health or environmental problems had yet been observed but also cautioned about long-term risks. All agreed that there was a great deal that was not yet known about the effects of genetic modification of foods.

Labeling of genetically modified foods was not an unreasonable approach to promoting more efficient markets, improving consumer choice, and creating incentives for minimizing the risks of genetic modification – goals that Europe, the United States, and developing countries shared. In the past, governments had often employed food labeling to promote public health and inform consumer choice when individual preferences differed. Europe and the United States already specified the labeling of ingredients, allergens, and nutrients in packaged foods.

In 2004, the EU did replace its informal moratorium with an exacting system of labeling and tracking genetically modified foods and animal feed. Some allowance was made for accidental contamination on the grounds that some mixing of crops was inevitable. Foods that contained less than 0.9 percent of genetically modified substances did not have to be labeled.[245] In order to implement the labeling regime, the EU required that the characteristics, shipping, and sale of genetically modified food ingredients be tracked from planting to incorporation in products. Tracking was essential in order to verify labeling and facilitate recalls. Genetically modified seeds also had to be labeled and tracked. In effect, genetically modified crops had to be segregated at each step of production and distribution – from farm to fork. The European Commission approved one variety of Bt corn for human consumption (but not planting) in May 2004, the first biotech product to gain approval since 1998. The commission also approved a variety of genetically modified maize in 2006.

After the StarLink contamination incident in 2000, the United States also proposed voluntary guidelines for companies to use if they wanted to inform consumers that their products did or did not contain genetically modified ingredients. The FDA recommended that labels feature statements that products were (or were not) genetically engineered or were made (or not made) using biotechnology, rather than statements that products were “GMO free,” since some degree of contamination seemed unavoidable.[246] In an unrelated regulatory change, the United States also introduced rules to standardize labeling of “organic” foods, a growing portion of the U.S. food market. Those rules included a requirement that foods labeled organic could not contain genetically modified ingredients.[247]

As of 2006, however, the labeling of genetically modified foods appeared unlikely to prove sustainable or effective as a public health measure or as a means of increasing market efficiency by informing consumer choice, for two reasons. First, frequent incidents of contamination between genetically modified and conventional crops, as well as acknowledgement that some contamination was inevitable, raised doubts about whether accurate labeling was technically feasible. Second, the underlying complexity and uncertainty of safety and environmental issues concerning genetic modification made it difficult to communicate accurately with consumers by means of labels. “GMOs fall into the class of risk situations characterized by both low certainty and low consensus,” David Winickoff and his coauthors suggested in an analysis of these food wars.[248] In such situations, labels that warn but do not inform tend to inflame public fears rather than improve public knowledge.

Labeling of genetically modified foods by the European Union also had extreme unintended consequences. In effect, it continued to preclude farmers in developing countries from planting genetically modified crops. Seemingly simple labeling required farmers, distributors, and food companies to segregate genetically modified crops at every step. Farmers, grain elevators, railroad cars, processing facilities, and food manufacturing plants needed separate facilities and processes for conventional and genetically modified fruits, vegetables, and grains. In the United States, officials estimated that crop segregation and tracking requirements might increase food production costs by 10 to 30 percent.[249]

In the absence of any more appropriate international forum, the continuing battle over the labeling of genetically modified foods took the form of a trade dispute, with the World Trade Organization (WTO) acting as arbiter. In February 2006 the WTO ruled that the EU’s informal ban against imports of genetically modified foods represented an unlawful restraint of trade (although the EU had by then technically lifted the ban).[250] EU officials countered that the WTO ruling would not influence their policies.

  1. The Securities Act is codified at 15 U.S.C. §§78a et seq. For a detailed account of these events, see Seligman, 1995, pp. 41–42.
  2. Quoted in Seligman, 1995, p. 71.
  3. We discuss this evolution in detail in Chapter 5.
  4. Seligman, 1995, pp. 431–437.
  5. FASB was governed and financed by the new Financial Accounting Foundation, a non-profit organization whose trustees were nominated by five leading accounting organizations (though still elected by the board of the American Institute of Certified Public Accountants, AICPA). Task forces drawn from a spectrum of interested groups as well as a broad-based advisory council gave FASB broader accountability. Unlike the previous board, its seven members held full-time positions and did not have other business affiliations. Soon after the board began operation, the SEC issued a policy statement recognizing its opinions as authoritative. Pacter, 1985, pp. 6–10. See also Seligman, 1995, pp. 452–466 and 554.
  6. Pacter, 1985, pp. 10–18; Seligman, 1995, pp. 555–557.
  7. One response was the Securities Investor Protection Act of 1970. It produced new SEC disclosure rules that required broker-dealers to give notice when new capital was insufficient or records were not current. Seligman, 1995, pp. 451–465.
  8. The scandal led to the Foreign Corrupt Practices Act of 1977, which required companies to maintain new accounting controls to assure that transactions were authorized by management. This additional transparency was designed to discourage illegal transfers. Seligman, 1995, pp. 539–549.
  9. Seligman, 1995, pp. 549–550.
  10. See http://www.sec.gov/pdf/handbook.pdf. Commission chairman Arthur Levitt emphasized the importance of constant vigilance to produce clear and accurate information. Floyd Norris, "Levitt to Leave SEC Early; Bush to Pick 4," New York Times, December 21, 2000, p. C1. See also Plain English Disclosure, 63 Fed. Reg. 6370, 6370 (February 6, 1998) (to be codified at 17 C.F.R. pts. 228, 229, 230, 239, and 274 (Release Nos. 33–7497; 34–39593; IC-23011; International Series No. 1113; File No. S7–3–97)).
  11. Smith and Emshwiller, 2003, pp. 374–376.
  12. General Accounting Office, 2002, p. 15.
  13. Sarbanes-Oxley Act of 2002, Pub. L. 107–204, Title IV, §404, July 30, 2002, 116 Stat. 745 (codified at 15 U.S.C.A. §7201 et seq. (West 2005) and scattered sections of 18 U.S.C.). Section 404 is codified in 15 U.S.C.A. §7262 (West 2005). For a look at how the Sarbanes-Oxley Act has amended various sections of the Securities Exchange Act of 1933, see http://www.sec.gov/divisions/corpfin/33act/index1933.shtml (site accessed June 4, 2006).
  14. See, for example, Executive Compensation, 17 C.F.R. §§228.402, 229.402 (2005).
  15. General Accounting Office, 2003a, p. 13.
  16. General Accounting Office, 2003a, pp. 16–17.
  17. Financial Literacy and Education Improvement Act, 20 U.S.C. 9701–08.
  18. Fagotto and Fung, 2003, p. 63.
  19. For discussions of the development of right-to-know laws and regulations in connection with health and safety risks, see Baram, 1984; Ashford and Caldart, 1985; Hadden, 1989.
  20. Schroeder and Shapiro, 1984.
  21. Oleinick, Fodor, and Susselman, 1988, provide a timeline for the adoption of state right-to-know laws. By 1982, five states had worker right-to-know laws; six more states adopted laws in 1983 and eight in 1984; in 1985, twenty-seven states had worker right-to-know laws.
  22. Hunter and Mason, 1996.
  23. See Executive Summary of Hazard Communication in the 21st Century Workplace, http://www.osha.gov/dsg/hazcom/finalmsdsreport.html (site accessed June 10, 2006). The Hazard Communication final rule can be found at 48 Fed. Reg. 53280 (November 25, 1983) (codified at 29 C.F.R. §1910.1200 (2005)).
  24. Occupational Safety and Health Administration, 2004.
  25. Stillman and Wheeler, 1987.
  26. General Accounting Office, 1992a.
  27. Baram, 1996. According to the author, liability and market forces promote compliance with the hazard communication standard.
  28. Tom Anschutz, "When OSHA Comes Calling," Occupational Hazards, March 2006, pp. 50–51.
  29. See Occupational Safety and Health Administration, 1997.
  30. Kolp et al., 1993.
  31. Phillips et al., 1999.
  32. Fagotto and Fung, 2003; Weil, 2005.
  33. Occupational Safety and Health Administration, 2004.
  34. The disclosure system was authorized by the Emergency Planning and Community Right-to-Know Act of 1986, 42 U.S.C. 11023(a). This account draws on several detailed analyses of the Toxics Release Inventory, including Fung and O'Rourke, 2000; Case, 2001; Cohen, 2001; Graham and Miller, 2001; Karkkainen, 2001; Pedersen, 2001; Graham, 2002a; Hamilton, 2005.
  35. Graham, 2002a, pp. 46–47.
  36. Exec. Order 12,856, 3 C.F.R. 616 (1993); Exec. Order 12,969, 60 Fed. Reg. 40989 (August 8, 1995), revoked by Exec. Order 13,148, 65 Fed. Reg. 24595 (April 21, 2000) (set out as a note in 42 U.S.C. §4321 (2000)).
  37. Toxics Release Inventory Burden Reduction, 70 Fed. Reg. 57822 (proposed October 4, 2005) (to be codified at 40 C.F.R. pt. 372).
  38. In the late 1990s, the federal EPA did make available Risk-Screening Environmental Indicators software that allowed users to analyze risk in general terms using disclosed toxic chemical data, http://www.epa.gov/opptintr/rsei/index.html.
  39. This account is drawn from a longer case study by Mary Graham: Graham, 2002a. For a summary of structural problems, see Graham, 2002a, pp. 47–49. For an empirical analysis of the impact of disclosure, see Graham and Miller, 2005. On the issue of timeliness, see U.S. EPA, 2004 TRI Public Data Release, April 12, 2006, http://www.epa.gov/tri/tridata/tri04/index.htm.
  40. Nutrition Labeling and Education Act of 1990, Pub. L. 101–535, November 8, 1990, 104 Stat. 2353 (codified at 21 U.S.C. §343 et seq. (2000)).
  41. This discussion is drawn from a longer case study in Graham, 2002a.
  42. These provisions are set forth at 21 U.S.C. 343(q)(1) (2000). See also Statement on Signing the Nutrition Labeling and Education Act of 1990, 26 Weekly Comp. Pres. Docs 1795 (November 8, 1990).
  43. See Graham, 2002a, pp. 81–101.
  44. Food Labeling: Trans Fatty Acids in Nutrition Labeling, Nutrient Content Claims, and Health Claims, 21 C.F.R. §101.9 (2005).
  45. Food Allergen Labeling and Consumer Protection Act of 2004, Pub. L. 108–282, Title II, August 2, 2004, 118 Stat. 905 (codified at 21 U.S.C.A. §374a (West 2005)).
  46. This account is drawn from a longer case study in Graham, 2002a.
  47. The Institute of Medicine defined errors as failures of planning or execution of a medical treatment. Errors were a subset of adverse events, defined as injuries attributable to medical management rather than to a patient’s underlying condition. Errors were also referred to as preventable adverse events. Institute of Medicine, 1999, pp. 23–30.
  48. Institute of Medicine, 1999, pp. 1–3.
  49. Institute of Medicine, 1999, pp. 3–13.
  50. Institute of Medicine, 1999, pp. 3–13.
  51. 10 N.Y. Comp. R & Regs. §709.14 (2005).
  52. 28 PA. Code §136.21 (West, Westlaw through May 2006).
  53. Patient Safety and Quality Improvement Act of 2005, Pub. L. 109–41, July 29, 2005, 119 Stat. 424 (codified at 42 U.S.C.A. §299b-21 et seq. (West, Westlaw through Pub. L. 109–169)).
  54. The National Academy for State Health Care Policy publishes periodic summaries of state patient safety laws and practice, http://www.nashp.org/.
  55. See, for example, Richard Perez-Pena, "Law to Rein in Hospital Errors Is Widely Abused, Audit Finds," New York Times, September 29, 2004.
  56. http://www.hospitalcompare.hhs.gov; http://www.qualitycheck.org (sites accessed May 12, 2006).
  57. The federal Megan’s Law was preceded in 1994 by the Jacob Wetterling Crimes Against Children and Sexually Violent Offender Registration Act, which required states to establish registries for sex offenders and child molesters. The Wetterling Act also mandated more stringent registration requirements for the most dangerous offenders, designated as "sexually violent predators." States that fail to comply with the Wetterling Act risk losing 10 percent of federal anticrime funding. Jacob Wetterling Crimes Against Children and Sexually Violent Offender Registration Act, Pub. L. 103–322, Title XVII, Subtitle A, §170101, September 13, 1994, 108 Stat. 2038. See also Adkins, Huff, and Stageberg, 2000, p. 1.
  58. See “Sex Offender Registration” (Westlaw 50 State Surveys: Surveys of Criminal Laws: Sex Offender Registration, 2006).
  59. Logan, 2003.
  60. In Alaska, the law had been ruled unconstitutional by the appellate courts because it imposed ex post facto punishment on offenders who had been convicted before the state law was passed. Smith v. Doe, 538 U.S. 84 (2003). In the Connecticut case, one issue was whether disclosing offenders’ data without proving that they remained dangerous represented a violation of the guarantee of due process. Connecticut Dept. of Public Safety v. Doe, 538 U.S. 1 (2003).
  61. 1990 Wash. Legis. Serv., ch. 3, §117 (codified at Wash. Rev. Code Ann. §4.24.550 (2005 & West Supp. 2006)).
  62. “The legislature ... finds that if the public is provided adequate notice and information, the community can develop constructive plans to prepare themselves and their children for the offender’s release. A sufficient time period allows communities to meet law enforcement to discuss and prepare for the release, to establish block watches, to obtain information about the rights and responsibilities of the community and the offender, and to provide education and counseling to their children.” 1994 Wash. Legis. Serv., ch. 129, §1 (codified at Wash. Rev. Code Ann. §4.24.550 (2005 & West Supp. 2006)).
  63. The Washington Association of Sheriffs and Police Chiefs collects and maintains a statewide registry based on the information provided by individual county sheriff’s offices.
  64. Wash. Rev. Code Ann. §9A.44.130.
  65. Wash. Rev. Code Ann. §9A.44.130(10)(a),(b).
  66. The Web site can be found at http://ml.waspc.org.
  67. See http://ml.waspc.org/index.aspx.
  68. The organization provides a range of services including a helpline for communities on using registry information; advocacy at the local, state, and federal level; and education, counseling, and policy research. The site maintained by the organization, http://www.parentsformeganslaw.com, also provides links to all fifty state registries as well as an evaluation of the accessibility of information. It gave Washington state a grade of “C” for its registry on the basis of a nationwide review of information accessibility in 2005.
  69. Safe Drinking Water Act of 1974, Pub. L. 93–523, July 1, 1974, c. 373, Title XIV, as added December 16, 1974, §2(a), 88 Stat. 1669 (codified at 42 U.S.C. §§300f et seq.).
  70. 42 U.S.C. §300g-2(c)(1)–(3).
  71. General Accounting Office, 1992b.
  72. See, for example, MacKenzie et al., 1994; Environmental Protection Agency, 1999.
  73. 42 U.S.C. §300g-2(c)(4). Regulations are codified at 40 C.F.R. §141.151 et seq.
  74. National Environmental Education and Training Foundation, 1999.
  75. Payment et al., 1991.
  76. Natural Resources Defense Council, 2003, Chapter 1, p. 2.
  77. Even small amounts of lead can cause neurological problems in children and high blood pressure in adults. The EPA findings are summarized in Congressional Research Service, 2005, p. 2.
  78. Congressional Research Service, 2005, p. 5.
  79. Natural Resources Defense Council, 2003, Chapter 3.
  80. Government Accountability Office, 2004, p. 13.
  81. National Research Council, 2002.
  82. The series, by KCBS-TV newsman Joel Grover, aired November 16, 17, and 18, 1997, on the Channel 2 News in Los Angeles.
  83. Hospitalizations and fatality estimates from Mead et al., 1999. CDC estimates, based on surveillance data from 1993 to 1997, reported in Centers for Disease Control and Prevention, Surveillance for Foodborne Disease Outbreaks – United States, 1993–1997, Morbidity and Mortality Weekly Report, vol. 49 (SS-1), 2000, pp. 22–26.
  84. For a general description, see Simon et al., 2005, pp. 32–36. Los Angeles County Ordinance 97–0071 §2 (part), 1997. http://municipalcodes.lexisnexis.com/codes/lacounty/_DATA/TITLE08/Chapter_8_04_PUBLIC_HEALTH_LICENSE/_8_04_225_Grading_and_letter_gr.html (site accessed June 3, 2006); see also County of Los Angeles Department of Health Services, Public Health Programs and Services, Environmental Health, Posting Requirements Advisory Bulletin: Retail Food Establishments, http://search.ladhs.org/images/nrfood.htm (site accessed April 29, 2006).
  85. The cities that had not adopted grade cards in Los Angeles County as of 2005 were Avalon, Azusa, City of Industry, Hidden Hills, La Habra Heights, Montebello, Redondo Beach, San Marino, Sierra Madre, and Signal Hill. Restaurants in those cities were inspected and received grades from the county, but were not required to post them.
  86. The DHS provides inspectors a detailed retail food inspection guide, broken into five sections. See County of Los Angeles, Department of Health Services, Retail Food Inspection Guide, H-3046 (May 2000). A subjective element (based on the inspectors’ overall assessment of hygiene status) was eliminated from the survey in July 1997 to improve the objectivity of the guidelines.
  87. The guidelines define an A as “[g]enerally superior in food handling practices and overall food facility maintenance”; a B as “[g]enerally good in food handling practices and overall food facility maintenance”; and a C as “[g]enerally acceptable in food handling practices and overall general food facility maintenance."
  88. A total of 989 restaurants out of 24,000 received closure orders in Los Angeles County in 2002. Most were temporary. See Martin Miller, “Five Years into L.A. County’s Grade-Posting Project, Most Restaurants Are Getting Top Marks,” Los Angeles Times, July 28, 2003.
  89. The ordinance specifically requires that the grade card be posted within five feet of the point of entry. If the numeric grade is below a C, the restaurant is required to post the numeric grade in its window.
  90. See Jin and Leslie, 2005, for a summary of these results. Jin and Leslie find that these changes arise from a combination of “sorting” (customers switching from restaurants with low grades to those with higher grades) and improvement in the hygiene practices of restaurants with lower ratings. See Jin and Leslie, 2003 and 2005.
  91. See Jin and Leslie, 2006.
  92. Along with anecdotal evidence, Jin and Leslie, 2005, p. 100, report that the distribution of grades around the critical scores of 89 (the line between an A and B) and 79 (between a B and C) shows a dramatic upward spike around the higher number, implying that inspectors may choose to bump up scores. If such activity occurs only at break points, this may imply only a mild form of grade inflation.
  93. Miller, “Five Years into L.A. County’s Grade-Posting Project, Most Restaurants Are Getting Top Marks.”
  94. David Pierson, “Where ‘A’ is Not on the Menu: Chinese Eateries in an L.A. County Enclave Struggle with Hygiene Ratings,” Los Angeles Times, September 28, 2005.
  95. An effort to replicate the Los Angeles County system in San Francisco faced fierce opposition when it was proposed in 2004. After a six-month battle, the San Francisco Board of Supervisors adopted a compromise measure requiring restaurants to post health inspection reports (but not summary grades), as well as merit symbols for those receiving high marks. See Suzanne Herel, “Health Ratings Win Approval,” San Francisco Chronicle, May 12, 2004, p. B4. Efforts to adopt a similar system in San Bernardino County have also faced opposition from restaurant owners and restaurant associations. See Martin Hugo, “San Bernardino County Considers Grading Restaurants,” Los Angeles Times, April 20, 2004, p. B5; see also Martin Hugo, “S.B. County Restaurants May Soon Get Health Ratings,” Los Angeles Times, April 28, 2004, p. B3.
  96. Based on a survey by the National Conference of State Legislatures in 2005. North Carolina's system is called the "Know the Score" program and uses a grading system similar to the one employed in Los Angeles. See N.C. Gen. Stat. §130A-249 (2005). Tennessee’s system also uses grade cards. See Tenn. Code Ann. §68–14–317 (2001). See also Pytka, 2005.
  97. These accidents and their causes were extensively reported on by Keith Bradsher of the New York Times in 2000.
  98. Government Accountability Office, 2005b, p. 31.
  99. Federal regulators first proposed a rollover standard in 1973. For a detailed history of rollover regulation, see National Academies, 2002, pp. 9–13.
  100. These goals are spelled out in Consumer Information Regulations; Rollover Prevention, 65 Fed. Reg. 34998–35024 (June 1, 2000) (codified at 49 C.F.R. pt. 575). See also Transportation Recall Enhancement, Accountability, and Documentation (TREAD) Act, Pub. L. 106–414, November 1, 2000, 114 Stat. 1800 (codified at 49 U.S.C. §30170 (2000)). In earlier rule makings, regulators established frontal crashworthiness ratings and side impact ratings for each new model.
  101. For a detailed discussion of the development of the five-star rating, including reliance on focus groups, see National Academies, 2002, pp. 68–71. The government replaced numerical ratings with star ratings after a 1992 Senate and Conference Appropriations report asked that methods be improved for informing consumers of the comparative safety of new models. For ratings history, see Government Accountability Office, 2005b, pp. 10–12.
  102. The final rule was published in 66 Fed. Reg. 3388–3437 (January 12, 2001).
  103. 66 Fed. Reg. 66190 (proposed December 21, 2001) (codified at 49 C.F.R. pt. 579, subpt. C).
  104. 66 Fed. Reg. 65536 (proposed December 19, 2001) (to be codified at 49 C.F.R. pts. 567, 571, 574, and 575).
  105. These requirements are set forth in 49 U.S.C. §30117(c).
  106. Consumer Information; New Car Assessment Program; Rollover Resistance, 68 Fed. Reg. 59250 (October 14, 2003) (codified at 49 C.F.R. pt. 575). The dynamic rollover test complemented but did not replace the government’s initial static test. The National Academies of Sciences’ recommendations are set forth in National Academies, 2002.
  107. Stars on Cars Act of 2005, S. 560, 109th Cong. (2005).
  108. New-model rollover ratings are listed by the National Highway Traffic Safety Administration at http://www.safercar.com. In 2004, there were 4.3 million visits to the ratings Web site. Government Accountability Office, 2005b, p. 15.
  109. Government Accountability Office, 2005b, p. 2.
  110. Government Accountability Office, 2005b, p. 26.
  111. Congress directed regulators to issue a minimum performance standard for rollovers by 2009. Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), Pub. L. 109–59, August 10, 2005, 119 Stat. 1144 (codified in scattered sections of 18 U.S.C.A., 23 U.S.C.A., and 49 U.S.C.A.) (West, Westlaw through Pub. L. 109–169). See Federal Motor Vehicle Safety Standards; Roof Crush Resistance, 70 Fed. Reg. 49223 (proposed August 23, 2005) (to be codified at 49 C.F.R. pt. 571).
  112. Government Accountability Office, 2005b, p. 36.
  113. Government Accountability Office, 2005b, pp. 27–28.
  114. Instructions for accessing backup crash test data are given at http://www.safercar.gov/pages/ResourcesLinksDCR.htm.
  115. For a general critique of auto safety star ratings, see National Academies, 1996, pp. 65–73, and Government Accountability Office, 2005b, pp. 27–28.
  116. Homeland Security Presidential Directive 3 (HSPD-3), March 12, 2002, http://www.fas.org/irp/offdocs/nspd/hspd-3.htm, as amended by Homeland Security Presidential Directive 5 (HSPD-5), February 28, 2003, http://www.fas.org/irp/offdocs/nspd/hspd-5.html and http://www.dhs.gov/dhspublic/display?content=4331 (sites accessed May 22, 2006).
  117. Remarks by Governor Ridge at Announcement of Homeland Security Advisory System, March 12, 2002, http://www.whitehouse.gov/news/releases/2002/03/20020312-11.html (site accessed May 22, 2006).
  118. Homeland Security Act of 2002, Pub. L. 107–296, Title II, Subtitle A, §201(d)(7), 116 Stat. 2135, 2146 (codified at 6 U.S.C. §§101 et seq. (Supp. III 2003)).
  119. Homeland Security Presidential Directive 3 (HSPD-3), March 12, 2002.
  120. Pub. L. 107–296, Title II, Subtitle A, Section 201(d)(7).
  121. Alerts are summarized at http://www.dhs.gov/dhspublic/interapp/editorial/editorial_0844.xml.
  122. Congressional Research Service, 2003, pp. 1–4.
  123. General Accounting Office, 2004, p. 13.
  124. Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction, 2003.
  125. Congressional Research Service, 2003, pp. 4–5.
  126. General Accounting Office, 2004, pp. 4–5 and 12–14.
  127. General Accounting Office, 2004, p. 18.
  128. General Accounting Office, 2004, p. 18.
  129. Senate Governmental Affairs Committee, 2003.
  130. General Accounting Office, 2004, p. 13.
  131. Attorney General Ashcroft, Director Ridge Discuss Threat Level, September 10, 2002 (White House transcript).
  132. Philip Shenon, “Threats and Responses: Domestic Security,” New York Times, June 6, 2003, p. A15.
  133. “Analysis: Congressional Hearings on Terror Alert System,” Morning Edition (National Public Radio), broadcast February 5, 2004.
  134. Philip Shenon, “Report Finds Threat Alerts in Color Code Baffle Public,” New York Times, August 10, 2003, p. A18.
  135. Council for Excellence in Government, From the Home Front to the Front Lines: America Speaks Out about Homeland Security (a Hart-Teeter poll, March 2004).
  136. Fox News polls, July 2002 and February 2003.
  137. Philip Zimbardo, “Phantom Menace,” Psychology Today, June 2003, pp. 34–36.
  138. One legislative reaction was passage of the Taft-Hartley Act of 1947, which set out sweeping amendments to the National Labor Relations Act. Among other features, the law described a new set of unfair labor practices for unions, including prohibitions against secondary boycotts and other forms of concerted activities by unions, as well as new employer rights to counter union organizing activities. Gross, 1981.
  139. Newspapers and radio covered the hearings closely and a number of rising political figures of the day – including John F. Kennedy and Robert F. Kennedy – made
  140. Concern about the LMRDA violating union officers’ Fifth Amendment rights under the Constitution is discussed in Robb, 1961. A pessimistic view from the time concerning the prospects for improving internal union democracy through government intervention can be found in Petro, 1959.
  141. For a classic discussion of the legal obstacles facing labor union representation under the National Labor Relations Act, see Weiler, 1983.
  142. Unions representing U.S. Postal Service workers are covered by the LMRDA. Other federal workers became covered by comparable standards in the Civil Service Reform Act of 1978 and the Foreign Service Act of 1980. Civil Service Reform Act of 1978, Pub. L. 95–454, October 13, 1978, 92 Stat. 1111 (codified at 5 U.S.C. §§1101 et seq. (2000)); Foreign Service Act of 1980, Pub. L. 96–465, October 17, 1980, 94 Stat. 2071 (codified at 22 U.S.C. §§3901 et seq. (2000)).
  143. Reporting requirements were reduced for small unions, in part because of requirements of the Paperwork Reduction Act. Rather than filling out the detailed Form LM-2, union entities with total annual receipts of less than two hundred thousand dollars were allowed to use the simplified Form LM-3 to report financial activities. Unions with annual receipts of less than ten thousand dollars were allowed to file the more abbreviated Form LM-4 (adopted in 1992 and put into effect in January 1994).
  144. General Accounting Office, 2000. The report also cites other reasons why unions face minimal incentives for timely reporting (e.g., cases against union entities with receipts under five thousand dollars are not even initiated until they have been delinquent filers for three consecutive years). Further, in cases where unions provided deficient information, the agency used voluntary methods to handle 90% of the cases and took no action regarding the remaining cases.
  145. Interviews with Hank Guzda, U.S. Department of Labor, Office of Labor/Management Services, April 1, 2002; David Geiss, Industrial Relations Specialist, U.S. Department of Labor, Office of Labor/Management Services, April 1, 2002.
  146. See General Accounting Office, 1999.
  147. See Office of Management and Budget, Exec. Office of the President, Budget of the United States Government, Fiscal Year 2001 (2000), and Office of Management and Budget, Exec. Office of the President, Budget of the United States Government, Fiscal Year 2006 (2005). The Department of Labor’s budget for fiscal year 2006 can be found at http://www.dol.gov/_sec/budget2006/overview.pdf (site accessed June 4, 2006). Information regarding the Department of Labor’s budget for fiscal year 2001 can be found at http://www.dol.gov/sec/budget/budget01.htm (site accessed June 4, 2006). The budget of the U.S. government in its entirety can be accessed through http://www.gpoaccess.gov/usbudget/fy06/index.html for fiscal year 2006 and through http://www.gpoaccess.gov/usbudget/fy01/index.html for fiscal year 2001.
  148. Similar changes to the LMRDA were introduced by President George H. W. Bush in 1992 but then rescinded by President Bill Clinton upon taking office in 1993. See Exec. Order No. 12,800, 57 Fed. Reg. 12985 (April 13, 1992), as corrected 57 Fed. Reg. 13413 (April 16, 1992), revoked by Exec. Order No. 12,836, 58 Fed. Reg. 7045 (dated February 1, 1993, and published February 3, 1993).
  149. See Labor Organization Annual Financial Reports, 68 Fed. Reg. 58374 (October 9, 2003) (codified at 29 C.F.R. pts. 403, 408). This five-thousand-dollar figure
  150. See Labor Organization Annual Financial Reports, 68 Fed. Reg. 58374. The new reporting requirement became effective in 2004.
