
Accounting, Economics, and Law: A Convivium


Evaluating Accounting Standards: A Comment on Ramanna’s ‘The International Politics of IFRS Harmonization’

Paul E. Madsen (Corresponding author)
University of Florida, PO Box 117166, 210 GER, Gainesville, Florida 32611, United States

Published Online: 2013-06-06 | DOI: https://doi.org/10.1515/ael-2013-0031

Abstract: Ramanna (this issue) argues that the big question at the centre of IFRS research is: “is the political process underlying IFRS facilitating the production of economically efficient standards?”, and presents new evidence that is informative about the political process underlying IFRS adoption decisions. In this comment, I explore how questions about the economic efficiency of standards might come to be answered. The current expertise-based approach to the evaluation of accounting standards is likely limited because it requires the deployment of expertise where it is least valuable, in low-validity/low-feedback environments. I propose regulatory field experimentation as a potential alternative means of accumulating the knowledge required to perform effective expertise-based evaluations of standards. But field experimentation could be costly and politically controversial. The proposal to permit multiple accounting standard-setters to compete in a given jurisdiction may be an effective means of reducing the reliance of the evaluative system on expertise, replacing it with market feedback.

Keywords: accounting standards; field experiments; regulatory competition

List of symposium papers

  1. “The International Politics of IFRS Harmonization” by Karthik Ramanna, DOI 10.1515/ael-2013-0004

  2. “The International Politics of IFRS Harmonization: A Comment” by Shizuki Saito, DOI 10.1515/ael-2013-0001

  3. “Towards a Comprehensive Appraisal of Global Accounting Harmonization: About the ‘Desirability’ of IFRS – A Comment on Ramanna’s ‘The International Politics of IFRS Harmonization’” by Jerome Haas, DOI 10.1515/ael-2013-0013

  4. “Comment on ‘The International Politics of IFRS Harmonization’” by Andreas Nolke, DOI 10.1515/ael-2013-0003

  5. “Evaluating Accounting Standards: A Comment on Ramanna’s ‘The International Politics of IFRS Harmonization’” by Paul E. Madsen, DOI 10.1515/ael-2013-0031

1 Introduction

Ramanna (this issue) argues that the big question at the centre of International Financial Reporting Standards (IFRS) research is: “is the political process underlying IFRS facilitating the production of economically efficient standards?”, and presents evidence that is informative about the political process underlying IFRS adoption decisions. I agree that Ramanna’s big question is a fascinating one and use this comment to explore how it might come to be answered. The evidence in Ramanna (this issue) speaks to the political process underlying IFRS, the first part of Ramanna’s big question. This represents a valuable contribution because it illustrates the power of political variables to explain variation in country-level IFRS adoption decisions. More research of this sort will be needed to answer the first part of Ramanna’s big question.

I begin this comment by arguing that financial reporting regulators, standard-setters, accounting practitioners, and researchers lack the knowledge they would need to evaluate the economic efficiency of accounting standards, the second part of Ramanna’s big question.1 The “knowledge problem” impeding evaluation of accounting standards is in large part a consequence of the nature of the standard-setting task, in which centralized policymaking is deployed to impact a fuzzy set of social and/or economic outcomes in the extremely complex environment of our modern global economy. Many centralized policymakers perform similar tasks facing similar constraints; how do they deal with the knowledge problem? The most common approach is to rely on experts or panels of experts whose knowledge is expected to give them insight into the likely causal relationships between policy tools and outcome variables. While expert skill can be very impressive under certain conditions, available evidence suggests that it is likely insufficient to produce net beneficial accounting standards because of the complexity of the task and the limited availability of feedback about the quality of prior decisions. Why, then, have the world’s financial reporting regulators increasingly gravitated toward systems for setting accounting standards that rely heavily on the intuitive power of experts? I argue that the political costs of publicizing the knowledge problem and asking financial statement preparers and users to bear the costs required to solve it may explain why accounting regulators have not pursued knowledge-generating systems of regulation.

Recognizing the limits of their expertise, a number of non-accounting policymakers have used large-scale field experiments specifically to facilitate evaluation of their policies. Field experiments involve testing the impacts of a proposed policy intervention in the field, rather than in the experimental laboratory or in the minds of experts performing thought experiments, by constructing groups of subjects, at a minimum including a control and a treatment group, applying the proposed policy to the treatment but not the control group, and observing how outcomes differ between the groups (Greenberg & Shroder, 2004). Experimentation by centralized policymakers confronts the knowledge problem directly by providing causal information about the relationships between a given policy and its purported costs and benefits. While accounting standard-setters have demonstrated some willingness to perform small-scale field studies, the “knowledge problem” with accounting standard-setting is likely to persist because of political barriers discouraging the implementation of large-scale regulatory experimentation and other potential solutions. The proposal to allow multiple competing financial reporting standard-setters in the United States is a policy innovation that could partially sidestep the knowledge problem. A competitive system could potentially facilitate the identification of which standards among a collection of available choices are relatively efficient (from the perspective of firms) through market feedback, reducing the extent to which the system relies on the powers of expert judgment to identify efficient standards.

Ramanna’s (this issue) main contribution is to provide some insight into the decision-making of national financial reporting regulators in the context of IFRS adoption decisions. Research of this kind is essential if we are to understand the properties of accounting standards because the structure of accounting standard-setting institutions is determined by political processes. My conclusion in this comment is that a robust understanding of the net economic and social effects of accounting standards is likely out of reach unless standard-setters specifically design policies to study them. Answering Ramanna’s big question will likely be impossible unless we first answer a smaller question: what would it take to convince financial reporting regulators and/or accounting standard-setters to take self-evaluation seriously?

2 Accounting standard-setters’ knowledge problem

Accounting standard-setters agree that evaluation of the quality of standards is a critical part of effective standard-setting, as evidenced by their support, in principle at least, for cost/benefit analyses of each new standard (FASB, 2012; IASB, 2010).2 An ideal cost/benefit analysis of a proposed standard would likely involve three steps: (1) identify a list of the potentially material types of both direct and indirect costs and benefits, (2) forecast their magnitudes, and (3) compare the aggregate forecasted costs with the aggregate forecasted benefits. However, it is widely recognized in the accounting and legal literatures, and in public statements by the standard-setters themselves, that steps 1 (identify the costs and benefits) and 2 (forecast their magnitudes) are not possible given the current state of accounting knowledge (Easterbrook & Fischel, 1991; FASB, 1991; IASB, 2010; Watts & Zimmerman, 1986).
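Mechanically, such an analysis is trivial once its inputs exist, which is precisely the problem. The following minimal Python sketch (every cost category, benefit category, and magnitude below is invented for illustration) performs the three steps; the binding constraint in practice is that steps 1 and 2 cannot currently be filled with credible real-world figures, not the arithmetic of step 3.

```python
# Hypothetical three-step cost/benefit analysis of a proposed standard.
# All categories and magnitudes are invented; the surrounding text argues
# that standard-setters cannot actually supply them.

# Step 1: identify potentially material direct and indirect costs and benefits.
# Step 2: forecast their magnitudes (here, simply assumed, in arbitrary units).
forecast_costs = {
    "preparer implementation": 120.0,
    "auditor learning": 40.0,
    "transitional loss of comparability": 25.0,
}
forecast_benefits = {
    "improved investor decisions": 150.0,
    "lower cost of capital": 60.0,
}

# Step 3: compare aggregate forecasted benefits with aggregate forecasted costs.
net_benefit = sum(forecast_benefits.values()) - sum(forecast_costs.values())
verdict = "net beneficial" if net_benefit > 0 else "net costly"
print(f"Forecasted net benefit: {net_benefit:+.1f} ({verdict})")
```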

Because they admittedly cannot systematically evaluate the net economic effects of their standards, standard-setters rely instead on the expert judgment of board members, and comments solicited from outside parties, to assess whether new standards are likely desirable. These expertise-based assessments are what constitute the “Benefits and Costs” section of new standards.3 Given that accounting standard-setting boards are populated by well-regarded experts, perhaps they are able to accurately identify important costs and benefits of new standards, forecast their magnitudes and determine when a proposed standard will likely be net beneficial. Unfortunately, decades of research on “clinical” versus “statistical” predictions in a wide array of human endeavours suggest that expertise in accounting is likely not sufficient for accurately predicting the costs and benefits of proposed accounting standards (Kahneman, 2011; Kahneman & Klein, 2009; Meehl, 1954; Tetlock, 2005). The primary reason is that performing a cost/benefit analysis of a proposed standard requires standard-setters to make complex economic predictions (What are the expected costs/benefits? How large are they likely to be?) in situations where diagnostic feedback about the quality of their prior predictions is generally unavailable. These conditions, namely a low-validity/low-feedback environment, have been shown to limit the quality of expert judgment, to the extent that experts are routinely outperformed by simple quantitative models that forecast using only the readily quantifiable subset of the information available to the experts (Kahneman, 2011; Kahneman & Klein, 2009; Meehl, 1954; Tetlock, 2005).4,5
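To fix ideas, the “simple quantitative models” in this literature are often no more than fixed linear rules over a handful of quantifiable inputs. Here is a minimal sketch in the spirit of the two-variable model Sarbin (1943) tested against student counselors (see footnote 4); the weights and applicant data are invented for illustration.

```python
# A sketch of the kind of "simple quantitative model" this literature pits
# against expert judgment: a fixed linear rule over two readily quantifiable
# inputs, in the spirit of Sarbin's (1943) two-variable model (footnote 4).
# Weights and applicant data are hypothetical.

def predicted_performance(high_school_rank: float, aptitude_score: float) -> float:
    # Equal, fixed weights: no expert tuning, no case-by-case intuition.
    return 0.5 * high_school_rank + 0.5 * aptitude_score

# Hypothetical standardized inputs for two applicants.
applicants = {"A": (0.9, 0.7), "B": (0.4, 0.8)}
for name, (rank, score) in applicants.items():
    print(name, round(predicted_performance(rank, score), 2))
```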

Estimating the costs and benefits of a proposed standard is a complex problem, as are the problems studied in the literature on clinical versus statistical prediction. But the task of evaluating accounting standards differs from the tasks studied in that literature. If an ambitious investigator were to embark on a study of clinical versus statistical prediction of the net social impact of a proposed accounting standard, the investigator would discover that the existing literature does not specify a comprehensive set of variables to include in a relevant quantitative model, and that the measurement technology needed to test whether the experts or the models made better predictions does not exist. In other words, though possible in theory, the task of performing cost/benefit analyses of accounting standards is rendered impossible in practice by the lack of quality theories to guide the construction of prediction models and the absence of the empirical technology required to assess the quality of prior predictions.

Research suggesting that experts struggle with prediction in low-validity/low-feedback environments has been available to the world’s financial reporting regulators, but many of them have nevertheless structured accounting standard-setting institutions to rely on expert judgment. It is not clear why this is so, and understanding it will likely require further insight into the political process of designing standard-setting institutions. One potential explanation is that accounting standard-setting organizations may be designed to serve purposes other than the promulgation of economically efficient accounting standards. Indeed, accounting standard-setting institutions can be viewed as buffer organizations intended to insulate parent organizations from the public pressure and controversy that seem to be an inevitable part of accounting standardization, a political purpose Horngren (1972) argues the APB served for the SEC. If they serve as political buffers, the usefulness of accounting standard-setting organizations may depend more on their ability to manage conflicts among constituents than on their ability to produce efficient standards, and their robustness to political threats may depend more on their effectiveness at promulgating standards that are maximally “socially acceptable” than on the economic efficiency of their standards (McLeay, Ordelheide, & Young, 2000, p. 79). In other words, the selection process shaping accounting standard-setting organizations may favour institutional structures and practices that effectively manage outsiders’ perceptions of the standard-setter’s legitimacy over structures and practices that promote efficient standard-setting and effective ex post evaluation of existing standards (Suchman, 1995).

In summary, accounting standard-setters cannot perform traditional, quantitative cost/benefit analyses, so they substitute judgment-based analyses. Existing evidence suggests that their judgment-based analyses are likely insufficient to distinguish net beneficial from net costly standards. The missing ingredient, whose absence makes effective evaluation of accounting standards impossible, is knowledge: specifically, knowledge about the sources of material costs and benefits of standards and their likely magnitudes as they are experienced by the many diverse producers and consumers of accounting information. To the extent that such knowledge exists, it is dispersed, idiosyncratic, and continually evolving, which makes it difficult to collect and aggregate (Hayek, 1945). The knowledge problem could be addressed directly, by improving the knowledge of accounting experts serving on standard-setting boards, or indirectly, by restructuring standard-setting institutions so that the evaluation of standards relies less on expert knowledge.

3 Field experiments could potentially relieve the accounting standard-setters’ knowledge problem

Solving the standard-setters’ knowledge problem directly through the discovery of new evaluation-relevant knowledge is complicated by practical constraints limiting applicable research opportunities. The informativeness of quasi-experiments on topics relevant for the evaluation of accounting standards is frequently limited by small sample sizes, limited observable variation, and uncontrolled confounds (Ball, 2008; Hail, Leuz, & Wysocki, 2010; Jamal, Maier, & Sunder, 2003; Kachelmeier & King, 2002). Laboratory experiments offer a means of dealing with uncontrolled confounds but, to be tractable, frequently involve significant simplification and abstraction away from the policy questions faced by standard-setters (Abdel-khalik, 1994; Kachelmeier & King, 2002). These conditions are not unique to accounting policymakers and, as already discussed, a frequent outcome in such situations is that policy is determined by experts using judgment. But in some cases, frequently when they encounter other experts with whom they disagree, policymakers use their policymaking tools to create new knowledge about how their interventions influence outcomes of interest. These “field studies” can be viewed as a “bridge between laboratory and naturally-occurring data in that they represent a mixture of control and realism usually not achieved in the lab or with uncontrolled data, permitting the analyst to address questions that heretofore were quite difficult to answer” (Levitt & List, 2009, p. 2). Field studies may, therefore, offer a means of easing the knowledge problem of accounting standard-setters because they can establish, with a higher degree of certainty than previously employed methods, how features of policy impact relevant outcomes in the real world.
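To make the mechanics concrete, here is a minimal Python sketch of the control-and-treatment logic described in the introduction (Greenberg & Shroder, 2004): randomly assign firms to groups, apply the policy only to the treatment group, and estimate its impact as the difference in mean outcomes. The firms, the outcome measure, and the effect size are all invented for illustration; randomization is what permits the difference in means to be read causally.

```python
import random
import statistics

# Minimal sketch of a randomized policy field experiment, with entirely
# hypothetical firms, outcome measure, and effect size.
random.seed(42)

firm_ids = list(range(200))
random.shuffle(firm_ids)
treatment_ids, control_ids = firm_ids[:100], firm_ids[100:]

ASSUMED_TRUE_EFFECT = 0.3  # unknown in practice; the experiment estimates it

def observed_outcome(treated: bool) -> float:
    """Stand-in for a measured outcome (e.g. some proxy for reporting quality)."""
    return random.gauss(0.0, 1.0) + (ASSUMED_TRUE_EFFECT if treated else 0.0)

treated_outcomes = [observed_outcome(True) for _ in treatment_ids]
control_outcomes = [observed_outcome(False) for _ in control_ids]

# Random assignment lets the difference in group means estimate the policy effect.
estimate = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated policy effect: {estimate:.3f}")
```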

Large-scale field experiments would not eliminate the need for expert judgment. The designers of field experiments have to make decisions about which policies to study, among which groups their impacts will be measured, and what sorts of impacts deserve attention. In practice, these decisions are made by groups of experts. If, as I have argued previously, expert judgment is likely insufficient to distinguish net beneficial standards from net costly standards, why should we expect that expert judgment is sufficient to execute effective field experiments? My answer is that relatively more is understood about how to conduct effective field experiments than about how to subjectively evaluate accounting standards. There are many cases in which centralized policymakers have used their policy tools to generate new knowledge in an effort to improve future policymaking. Examples are reviewed for economics in Levitt and List (2009) and Card, DellaVigna, and Malmendier (2011), for political science and development in Humphreys and Weinstein (2009), for criminology in Sherman, Farrington, Welsh, and MacKenzie (2002), for micro-banking in Karlan and Appel (2011), and for the management of firms in Bandiera, Barankay, and Rasul (2011). Greenberg and Shroder (2004) is a broad review cutting across many disciplines. Field experimentation by accounting standard-setters can be viewed, therefore, as the deployment of expertise where it is relatively valuable (the design and execution of field experiments) as a means of improving the quality of judgments where the power of expertise is relatively limited (the evaluation of accounting standards).

4 Why don’t standard-setters use large-scale field experiments?

Centralized decision makers in many fields have shown that large-scale field experiments can be a profitable means of gathering evaluation-relevant information, and accounting standard-setters admit that they lack such information (FASB, 1991; IASB, 2010). But accounting standard-setters have not used large-scale field experiments to relieve their knowledge problem.6 In this section, I propose reasons why this might be so: cost, expert confidence and concerns about fairness.

Conducting field experiments can be a costly endeavour. In financial reporting, field experiments would inevitably reduce, at least in the short term, the comparability of financial statements, one of the purportedly significant benefits of financial reporting regulation. Firms incur implementation costs any time a new standard is issued, costs they would likely be less willing to bear if they knew that the change was experimental and might be temporary. There are also typically significant costs involved in designing a field experiment, implementing it, and measuring the constructs of interest. Given such costs, field experiments themselves must be subjected to a judgment-based cost/benefit analysis. Before a large-scale field experiment can be carried out, standard-setters and their constituents must come to expect that the long-term gains from conducting the experiment justify the short-term costs.

Experts are defined as people who are skilled or knowledgeable in a particular field, and they frequently show remarkable intuitive powers in their domain of expertise (see Gobet & Charness, 2006; Larkin, McDermott, Simon, & Simon, 1980; Shenk, 2010). Experts are justifiably confident about the accuracy of their domain-specific expectations, but the psychology literature suggests that experts and non-experts alike have difficulty estimating the accuracy of their predictions; they tend to be confident in situations where they are competent but do not adequately lower their confidence in situations where they are less competent (Kahneman, 2011; Lichtenstein & Fischhoff, 1977; Plous, 1993). Overconfidence could potentially explain the frequent statements by standard-setters and other expert commentators that IFRS adoption is, or would be, economically beneficial despite the limited and inconsistent evidence supporting this view (Dzinkowski, 2012; FCAG, 2009; G-20, 2009; ICAEW, 2009; SEC, 2010).7 In addition, if standard-setters are unjustifiably confident in the quality of their standards, their subjective cost/benefit analyses may be biased. Overconfident standard-setters may be unlikely to recognize the potential for biased assessments of their own work and may fail to see a need to take additional steps to facilitate a better understanding of the links between accounting standards and economic and social outcomes.

To maximize their internal validity, field studies frequently assign subjects randomly to differing experimental conditions. While random assignment is a valuable means of strengthening the inferences that can be drawn from an experiment, it can be controversial because it is perceived as unfair by many experimental subjects (Erez, 1985) and by academics and policymakers (Cook & Payne, 2002). One can imagine that firms would not react well if they were randomly assigned by the IASB or FASB to use, or not to use, an experimental financial reporting method that materially impacts their financial statements. As a consequence of the repugnance of random assignment, implementing field experiments would likely be controversial even in situations where all parties agree that there is a need to learn more about how policy tools impact important outcomes. There would have to be a consensus among firms and regulators that the long-term benefits of improved knowledge about the effects of standards outweigh the short-term costs imposed by random assignment. Building such a consensus would likely be a difficult task.

While large-scale field experiments could potentially produce valuable new evaluation-relevant knowledge, this knowledge would inevitably be provisional because accounting standards are just a piece of a broad institutional context influencing the production of financial statements (Ball, Kothari, & Robin, 2000; Ball, Robin, & Wu, 2003). Causal knowledge produced by accounting standard-setting field experiments would likely become outdated as changes in other parts of the institutional context influenced the demand for accounting information. As a result, accounting standard-setters would need to maintain an ongoing project of field experimentation to preserve the quality of their evaluation-relevant knowledge.

Accounting standard-setters would have to overcome significant barriers before they could implement field experiments. But similar barriers exist for all field experiments carried out by centralized policymakers. All field experiments are costly, yet many occur anyway, which suggests that the barriers to field experimentation are frequently surmounted. A tendency toward overconfident predictions is common among all people, yet centralized decision-makers in many domains have been convinced that they need the new knowledge available from field experiments to inform their decisions. All randomized experiments are in a real sense unfair. Pricing experiments inevitably give unequal treatment to customers, medical experiments inevitably give unequal treatment to patients, education experiments inevitably give unequal treatment to students, experiments in charitable giving inevitably give unequal treatment to the needy, and so on (Harford, 2011). But field experiments employing random assignment have been carried out in these fields anyway. The barriers to experimentation by accounting standard-setters are certainly significant. But they are not necessarily insurmountable.

5 Conclusions

Ramanna (this issue) presents evidence suggesting that international political forces influence national IFRS adoption decisions and argues that the big question in IFRS research is whether the political process underlying the production of IASB standards facilitates the development of efficient standards. In this comment, I offer a pessimistic view on the likelihood of producing efficient, or even net beneficial, standards given our existing framework for setting accounting standards and our poor understanding of the types of material costs and benefits and their magnitudes. I argue that large-scale regulatory field experiments, which have successfully facilitated the evaluation of regulations in many non-accounting domains, are a potential means of gathering the information that would be required for standard-setters structured like the FASB and IASB to produce net beneficial standards. However, convincing standard-setters of the need for more information, and convincing the broad accounting community that the cost of acquiring such information is justified, would likely be a political challenge.

Field experimentation is a method that could help accounting standard-setters as they are currently structured, centralized bodies of experts writing uniform standards for all public firms, collect evaluation-relevant knowledge. But because field experiments are likely to be costly and controversial, experimentation within the existing institutional framework may not be feasible. Other methods for producing evaluation-relevant knowledge and increasing the efficiency of accounting standards could involve significantly changing the institutional structure of accounting standard-setting as a means of reducing the reliance of accounting standard evaluation on expert judgment. One potentially fruitful approach that has been proposed in the accounting literature is competition between two or more accounting standard-setters (Hail et al., 2010; Kothari, Ramanna, & Skinner, 2010; Sunder, 2002). Standard-setter competition could potentially improve the efficiency of accounting standards, even if it did not generate new evaluation-relevant knowledge, because market feedback in the form of greater demand for one set of standards over another may encourage standard-setters to maximize the net benefits of their standards, at least from the perspective of firms. Standard-setter competition could also produce evaluation-relevant knowledge if observable firm choices are informative about the features of standards that make them attractive to firms (Sunder, 2010).

The existence of standard-setter competition in other economic domains suggests that it could be a feasible approach in accounting (Jamal & Sunder, 2011). But like many features of standard-setting systems, standard-setter competition would involve uncertainties and trade-offs. The most obvious trade-off relative to the current system for setting accounting standards is a probable reduction in financial statement comparability. The consequences of standard-setter competition for economic efficiency would depend on two factors: the extent to which choice enabled managers to run their firms more efficiently, perhaps by making use of their relatively rich knowledge of their firms’ particular local circumstances when choosing among available accounting standards (Hayek, 1945), and the extent to which the preferences of managers correlate with the preferences of other parties impacted by their choice of accounting standards, with higher correlations leading to greater macro-level efficiency. Relative to field studies, the data produced by standard-setter competition would likely be less diagnostic because, with competition, firms self-select into conditions, as the sketch below illustrates. But standard-setter competition would likely reduce the extent to which the evaluation of standards requires drastic improvements in accounting knowledge. Standard-setter competition may also be less costly to implement than field studies if firms have incentives to make pro-social use of the discretion it would afford them (Dye & Sunder, 2001).
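The self-selection point can be made concrete with a small simulation (every number below is hypothetical): if firms whose circumstances already favour good outcomes are also the firms that choose a given set of standards, a naive comparison of outcomes across adopters conflates the standards’ effect with firm characteristics, which is exactly what randomized assignment avoids.

```python
import random
import statistics

# Hypothetical simulation of self-selection. Firms with favourable
# circumstances ("quality") prefer standard B, so a naive comparison of
# outcomes between B-adopters and A-adopters overstates B's true effect.
random.seed(7)
TRUE_EFFECT_B = 0.2  # assumed true advantage of standard B

outcomes_a, outcomes_b = [], []
for _ in range(10_000):
    firm_quality = random.gauss(0.0, 1.0)
    chooses_b = firm_quality > 0  # self-selection correlated with quality
    outcome = firm_quality + (TRUE_EFFECT_B if chooses_b else 0.0)
    (outcomes_b if chooses_b else outcomes_a).append(outcome)

naive_estimate = statistics.mean(outcomes_b) - statistics.mean(outcomes_a)
print(f"True effect: {TRUE_EFFECT_B}; naive estimate: {naive_estimate:.2f}")
# The naive estimate (roughly 1.8 here) bundles firm quality into the comparison.
```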

My primary argument is that expertise-based systems for setting accounting standards could work better if the experts running them had better methods of observing and learning from the outcomes of their decisions. But developing and deploying such methods would likely be difficult. Standard-setter competition is an alternative to our current system for setting accounting standards that demands less of those setting standards because feedback about the impacts of prior decisions, in the form of market-based changes in the demand for a given system of standards, is baked into the system.

The spread of IFRS, the design of accounting standard-setting bodies, and, as I argue here, building the consensus required to produce information to evaluate accounting standards, are all likely the outcomes of political processes about which little is known. Much more work like Ramanna’s (this issue) will be needed to get answers to our big questions about how political forces impact the design of accounting standard-setting systems and about how alternative standard-setting systems might impact economic efficiency and social justice.

Acknowledgements

For their helpful comments on earlier drafts of this comment, I thank Yuri Biondi, an anonymous referee, and participants in the 2012 Introduction to Accounting Research doctoral seminar at the University of Florida: William Ciconte, Matthew Driskill, Nadine Funcke, Andrew Kitto, Andy Liu, Eddie Thomas, Angie Wang, Devin Williams and Ying Zhou.

References

  • Abdel-khalik, A. R. (1994). Factors limiting the role of behavioral research in standard setting. Behavioral Research in Accounting, 6(Supplement), 213–222.

  • Aegisdottir, S., White, M. J., Spengler, P. M., Maugherman, A. S., Anderson, L. A., Cook, R. S., … Rush, J. D. (2006). The meta-analysis of clinical judgment project: Fifty-six years of accumulated research on clinical versus statistical prediction. The Counseling Psychologist, 34(3), 341–382.

  • Ball, R. (2008). What is the actual economic role of financial reporting? Accounting Horizons, 22(4), 427–432.

  • Ball, R., Kothari, S. P., & Robin, A. (2000). The effect of international institutional factors on properties of accounting earnings. Journal of Accounting and Economics, 29(1), 1–51.

  • Ball, R., Robin, A., & Wu, J. S. (2003). Incentives versus standards: Properties of accounting income in four East Asian countries. Journal of Accounting and Economics, 36(1–3), 235–270.

  • Bandiera, O., Barankay, I., & Rasul, I. (2011). Field experiments with firms. Journal of Economic Perspectives, 25(3), 63–82.

  • Card, D., DellaVigna, S., & Malmendier, U. (2011). The role of theory in field experiments. Journal of Economic Perspectives, 25(3), 39–62.

  • Cook, T. D., & Payne, M. R. (2002). Objecting to the objections to using random assignment in education research. In F. Mosteller & R. Boruch (Eds.), Evidence matters: Randomized trials in education research (pp. 150–178). Washington, DC: Brookings Institution Press.

  • Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668–1674.

  • Dye, R. A., & Sunder, S. (2001). Why not allow FASB and IASB standards to compete in the U.S.? Accounting Horizons, 15(3), 257–271.

  • Dzinkowski, R. (2012). How the crisis changed IFRS forever. CFO Insight, July 18, 2012.

  • Easterbrook, F. H., & Fischel, D. R. (1991). The economic structure of corporate law. Cambridge, MA: Harvard University Press.

  • Erez, E. (1985). Random assignment, the least fair of them all: Prisoners’ attitudes toward various criteria of selection. Criminology, 23(2), 365–379.

  • European Financial Reporting Advisory Group (EFRAG). (2012). Position paper: Considering the effects of accounting standards. Retrieved from http://www.efrag.org/files/Effects/120717_Final_Position_Paper_ES.pdf

  • Financial Accounting Standards Board (FASB). (1991). Benefits, costs, and consequences of financial accounting standards. Financial Accounting Standards Board Special Report. Norwalk, CT: FASB.

  • Financial Accounting Standards Board (FASB). (2010). Outreach and field testing related to the July 2010 Staff Draft. Financial Statement Presentation. Retrieved November 2012 from http://www.fasb.org/cs/ContentServer?c=Document_C&pagename=FASB%2FDocument_C%2FDocumentPage&cid=1176157168799

  • Financial Accounting Standards Board (FASB). (2012). Rules of procedure. Norwalk, CT: FASB.

  • Financial Crisis Advisory Group (FCAG). (2009). Report of the financial crisis advisory group. Retrieved November 2012 from http://www.fasb.org/cs/ContentServer?c=Document_C&pagename=FASB%2FDocument_C%2FDocumentPage&cid=1176156365880

  • G-20. (2009). Leaders’ statement: The Pittsburgh Summit. Retrieved November 2012 from http://www.treasury.gov/resource-center/international/g7-g20/Documents/pittsburgh_summit_leaders_statement_250909.pdf

  • Gobet, F., & Charness, N. (2006). Expertise in chess. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

  • Goldberg, L. R. (1976). Man versus model of man: Just how conflicting is that evidence? Organizational Behavior and Human Performance, 16(1), 13–22.

  • Greenberg, D., & Shroder, M. (2004). The digest of social experiments (3rd ed.). Washington, DC: The Urban Institute Press.

  • Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12(1), 19–30.

  • Hail, L., Leuz, C., & Wysocki, P. (2010). Global accounting convergence and the potential adoption of IFRS by the U.S. (Part II): Political factors and future scenarios for U.S. accounting standards. Accounting Horizons, 24(4), 567–588.

  • Hakeem, M. (1948). The validity of the Burgess method of parole prediction. American Journal of Sociology, 53(5), 376–386.

  • Harford, T. (2011). Adapt: Why success always starts with failure. New York, NY: Picador.

  • Hayek, F. A. (1945). The use of knowledge in society. The American Economic Review, 35(4), 519–530.

  • Horngren, C. T. (1972). Accounting principles: Private or public sector? Journal of Accountancy, 133(5), 37–41.

  • Humphreys, M., & Weinstein, J. M. (2009). Field experiments and the political economy of development. Annual Review of Political Science, 12, 367–378.

  • Institute of Chartered Accountants in England and Wales (ICAEW). (2009). Comment letter regarding the Securities and Exchange Commission’s Roadmap for the application of International Financial Reporting Standards. Retrieved November 2012 from http://www.sec.gov/comments/s7-27-08/s72708-189.pdf

  • International Accounting Standards Board (IASB). (2010). The conceptual framework for financial reporting. London, UK: IASB.

  • Jamal, K., Maier, M., & Sunder, S. (2003). Privacy in E-commerce: Development of reporting standards, disclosure, and assurance services in an unregulated market. Journal of Accounting Research, 41(2), 285–309.

  • Jamal, K., & Sunder, S. (2011). Monopoly versus competition in setting accounting standards. Working paper.

  • Kachelmeier, S. J., & King, R. R. (2002). Using laboratory experiments to evaluate accounting policy issues. Accounting Horizons, 16(3), 219–232.

  • Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.

  • Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526.

  • Karlan, D., & Appel, J. (2011). More than good intentions: How a new economics is helping to solve global poverty. New York, NY: Penguin Group.

  • Kothari, S. P., Ramanna, K., & Skinner, D. J. (2010). Implications for GAAP from an analysis of positive research in accounting. Journal of Accounting and Economics, 50(2–3), 246–286.

  • Larkin, J., McDermott, J., Simon, D. P., & Simon, H. A. (1980). Expert and novice performance in solving physics problems. Science, 208(4450), 1335–1342.

  • Levitt, S. D., & List, J. A. (2009). Field experiments in economics: The past, the present, and the future. European Economic Review, 53(1), 1–18.

  • Lichtenstein, S., & Fischhoff, B. (1977). Do those who know more also know more about how much they know? Organizational Behavior and Human Performance, 20(2), 159–183.

  • McLeay, S., Ordelheide, D., & Young, S. (2000). Constituent lobbying and its impact on the development of financial reporting regulations: Evidence from Germany. Accounting, Organizations and Society, 25(1), 79–98.

  • Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and review of the evidence. Minneapolis, MN: University of Minnesota Press.

  • Plous, S. (1993). The psychology of judgment and decision making. New York, NY: McGraw-Hill.

  • Popovics, A. J. (1983). Predictive validities of clinical and actuarial scores of the Gesell Incomplete Man Test. Perceptual and Motor Skills, 56(3), 864–866.

  • Ramanna, K. (this issue). The international politics of IFRS harmonization. Accounting, Economics, and Law: A Convivium, 2(1).

  • Richardson, A. J., & Eberlein, B. (2011). Legitimating transnational standard-setting: The case of the International Accounting Standards Board. Journal of Business Ethics, 98(2), 217–245.

  • Sarbin, T. R. (1943). A contribution to the study of actuarial and individual methods of prediction. American Journal of Sociology, 48(5), 593–602.

  • Securities and Exchange Commission (SEC). (2010). Commission statement in support of convergence and global accounting standards. Retrieved November 2012 from http://www.sec.gov/rules/other/2010/33-9109.pdf

  • Shenk, D. (2010). The genius in all of us. New York, NY: Doubleday.

  • Sherman, L. W., Farrington, D. P., Welsh, B. C., & MacKenzie, D. L. (Eds.). (2002). Evidence-based crime prevention. New York, NY: Routledge.

  • Suchman, M. C. (1995). Managing legitimacy: Strategic and institutional approaches. The Academy of Management Review, 20(3), 571–610.

  • Sunder, S. (2002). Regulatory competition among accounting standards within and across international boundaries. Journal of Accounting and Public Policy, 21(3), 219–234.

  • Sunder, S. (2010). Adverse effects of uniform written reporting standards on accounting practice, education, and research. Journal of Accounting and Public Policy, 29(2), 99–114.

  • Tetlock, P. E. (2005). Expert political judgment: How good is it? How can we know? Princeton, NJ: Princeton University Press.

  • Watts, R. L., & Zimmerman, J. L. (1986). Positive accounting theory. Upper Saddle River, NJ: Prentice-Hall.

  • Zeff, S. A. (1978). The rise of “economic consequences.” Journal of Accountancy, 146(6), 56–63.

Citation Information

  • Paul E. Madsen (2013), “Evaluating Accounting Standards: A Comment on Ramanna’s ‘The International Politics of IFRS Harmonization,’” Accounting, Economics, and Law: A Convivium, 3(2): 77–92. DOI: https://doi.org/10.1515/ael-2013-0031. ISSN (Online) 2152-2820; ISSN (Print) 2194-6051.

Footnotes

  • This comment is based in part on my presentation at the 2012 American Accounting Association Annual Meeting in a panel session titled “Devil’s Advocate: The Most Incorrect Beliefs of Accounting Experts.”

  1. Accounting standard-setters recognize that economic efficiency is an ambitious evaluative benchmark by which to judge their work. As a practical matter, they instead aim to produce standards that are net beneficial, a lower but still ambitious benchmark. They have used the language of cost/benefit analysis (FASB, 2012; IASB, 2010) and, more recently, of “effect analysis” (EFRAG, 2012) when discussing their evaluative frameworks. In this comment, I use the cost/benefit evaluative framework because it has long been used by standard-setters themselves and is a lower, and hence more realistically achievable, benchmark than economic efficiency.

  2. Before the 1970s, standard-setters paid little attention to the “economic consequences” of their standards, viewing their task as primarily technical and involving only “fair presentation” and sound measurement (Zeff, 1978). However, since the mid-1970s, there has been a steadily increasing emphasis by standard-setters and their constituents on the economic and social consequences of accounting standards (Zeff, 1978). The cost/benefit constraint discussed by both the FASB and IASB (FASB, 2012; IASB, 2010) is a manifestation of this trend, and I assume in this comment that the broad social and economic impacts of accounting standards are relevant for evaluations of their quality.

  3. FASB (1991, p. iii) puts it this way: “… the Financial Accounting Standards Board is frequently challenged to measure the expected benefits to the large and diverse community of users of financial information versus the costs of that information. In most cases, the best that can be done is conscientious judgmental assessment of costs and benefits.” (emphasis in original). IASB (2010, QC38–39) puts it this way: “When applying the cost constraint in developing a proposed financial reporting standard, the Board seeks information from providers of financial information, users, auditors, academics and others about the expected nature and quantity of the benefits and costs of that standard. In most situations, assessments are based on a combination of quantitative and qualitative information. Because of the inherent subjectivity, different individuals’ assessments of the costs and benefits of reporting particular items of financial information will vary.”

  4. Examples of situations in which experts are either outperformed by simple models, or their performance is indistinguishable, include: forecasts of the future academic performance of incoming college freshmen by experienced and well-trained student counselors versus a two-variable (high school rank and college aptitude test score) model (Sarbin, 1943); forecasts about whether paroled prisoners would commit further crimes by prison psychiatrists versus an unweighted model based on 21 observable prisoner characteristics like age and length of prison term (Hakeem, 1948); bankruptcy prediction by bank loan officers versus a multivariate model using five financial ratios (Goldberg, 1976); predictions of children’s intelligence scores based on the “incomplete man” test interpreted by an expert versus interpretations by an “actuarial” model (Popovics, 1983); and the prediction of geo-political events by policy and subject matter experts versus simple time-series models (Tetlock, 2005). Expert performance improves when the conditions for learning improve, that is, when the information signals available to experts are reliable indicators of how available decisions will impact outcomes of interest and when timely feedback is available about the actual outcomes of decisions previously made (Kahneman & Klein, 2009).

  5. Meehl (1954) is the seminal work in this literature. Reviews and meta-analyses of this literature include Dawes, Faust, and Meehl (1989); Grove, Zald, Lebow, Snitz, and Nelson (2000); and Aegisdottir et al. (2006).

  6. Accounting standard-setters have used small-scale field experiments primarily to study implementation costs (FASB, 2010).

  7. One cannot distinguish whether confident public statements reflect the speakers’ underlying beliefs or whether they are politically motivated. For example, those involved with the IASB may wish to appear confident to outsiders as a means of building the IASB’s legitimacy as a transnational rule-maker (Richardson & Eberlein, 2011).
