
Nonprofit Policy Forum

Editor-in-Chief: Young, Dennis R.

Open Access | Online ISSN: 2154-3348

Measuring Latent Constructs in Nonprofit Surveys with Item Response Theory: The Example of Political Ideology

Dyana P. Mason
  • Corresponding author
  • Department of Planning, Public Policy and Management, University of Oregon, Eugene, OR 97403-1209, USA
Published Online: 2017-03-22 | DOI: https://doi.org/10.1515/npf-2016-0020

Abstract

Latent constructs are the unobservable characteristics of individuals, groups and organizations. Although researchers use many tools to measure latent constructs, including scaled items and factor analysis techniques, this study offers a different way to measure these characteristics in nonprofit research. Using Item Response Theory (IRT), this study develops one approach to measuring revealed political ideology among leaders in nonprofit social welfare organizations. This approach can also be used to measure a variety of other constructs that may be difficult to capture through traditional approaches, opening up new lines of inquiry for those who study nonprofit organizations.

Keywords: nonprofit; measurement; methods; policy; political ideology

A New Way to Measure Latent Constructs in Nonprofit Surveys: The Example of Political Ideology

Latent constructs are the unobservable characteristics of individuals, groups or organizations, such as public opinion, ideology, preferences and values. Although researchers use many methods to measure latent constructs, including scaled items and factor analysis techniques, this study offers nonprofit scholars one approach to measuring these characteristics – Item Response Theory (IRT) – a powerful methodological tool for understanding latent constructs and their effects on individual or organizational behavior. This is particularly important for studies of the nonprofit sector, where a significant amount of the data generated for study relies upon survey and interview instruments (Lin and Van Ryzin 2012). Item Response Theory has been used successfully to measure latent constructs in fields as disparate as psychology, education, political science and public administration, and can readily be applied to studies of the nonprofit sector and nonprofit organizations.

This study demonstrates how to measure one latent construct that may be of interest to nonprofit and public sector scholars – political ideology – and illustrates the method with survey data. Political ideology has been found to have broad-reaching implications for public policy formation, advocacy, strategic decision making and organizational behavior. Using answers to a statewide survey of nonprofit executives in California, the revealed political ideology of leaders is estimated using a technique developed by Clinton, Jackman, and Rivers (2004) and Martin and Quinn (2002). To increase confidence in the measure, several consistency checks are conducted. The article then discusses other applications of IRT and how the technique may be applied to other studies. Importantly, IRT is effective in measuring constructs at both the individual and aggregate (organization) levels, opening up new lines of inquiry.

Political Ideology

Studying the role of political ideology in driving political and policy outcomes has an established tradition in political science (Cox and McCubbins 1986; McCubbins, Noll, and Weingast 1987; Krehbiel 1996; Downs 1957; Epstein and O’Halloran 1999; Ferejohn and Shipan 1990; Gailmard 2009; Truman 1951; Moe 1988, 1991, 1987). It is also of interest to those who study “interest groups” (often nonprofit organizations) (Poole and Rosenthal 2007; Bonica 2013; Mason 2015; Ainsworth 1993; Ainsworth and Sened 1993; McKay 2010), as well as to public management scholars (Clinton et al. 2012; Clinton and Lewis 2008; Bertelli and Grose 2009; Clinton, Lewis, and Selin 2014; Chen and Johnson 2015). Scholars have long been interested in the interplay between politics and political actors, institutions and ideology. However, political ideology – outside of a self-reported party affiliation – is a complex latent construct that is difficult to measure. Political ideology “helps to interpret the social world … Specific ideologies crystallize and communicate the widely (but not unanimously) shared beliefs, opinions, and values of an identifiable group, class constituency or society” (Jost, Federico, and Napier 2009, 309). Most scholars and public opinion pollsters tend to measure political ideology with a question about party affiliation, such as Democrat, Republican or Independent, or by asking where respondents fall on a unidimensional left-right political spectrum using a three-, five- or seven-point scale (i.e. from extremely conservative to extremely liberal).

This conceptualization of ideology is consistent with the spatial theory of voting developed by Hotelling (1929) and Downs (1957). Downs, specifically, described the ways in which voters are aligned along a left-right continuum and cast their votes for the candidate closest to them on that spectrum. The position of the voter on that continuum is called the voter’s ideal point, as is the position of the candidate. Distance from the ideal point translates into utility for the voter: the closer the candidate is to the voter, the more utility the voter receives from casting a vote for that candidate. In the competition for votes, candidates can be seen as converging on the median voter in order to secure a majority and win the election (Bertelli 2012; Poole 2005; Downs 1957). This intuition has also been used to explain voting in legislative bodies, where individual members of legislatures tend to converge on the median voter in crafting and passing legislation (Downs 1957; Shepsle 1986; Poole and Rosenthal 1985; Krehbiel 2010, 1996; Cox and McCubbins 2007).

Although it has been shown that political ideology can be measured on multiple dimensions (Feldman and Johnston 2014; Layman and Carsey 2002; Zumbrunnen and Gangl 2008), this study makes use of the most common conceptualization, a single dimension along a left-right continuum. The left end of the spectrum is most often defined as the liberal space and is associated with advocating social change and rejecting inequality, while the right side is associated with conservatism, characterized by rejecting social change and accepting inequality (Jost et al. 2003b, 2003a). Alford, Funk, and Hibbing (2005) developed a separate spectrum bookended by two individual-level “phenotypes”. On one end are the “contextualists”, who are more aligned with liberal politics and are associated with a higher tolerance of out-groups, a generally optimistic view of human nature, opposition to hierarchy and authority, higher levels of empathy and a reduced demand for “punitiveness”. On the other end are the “absolutists”, who are more aligned with the normative view of conservatism and show support for rigid moral rules, higher acceptance of inequality, in-group unity and higher expectations for punitiveness.

Jennings (1992) finds that the ideology of political elites is more stable, and more constrained along a single dimension, than that of the general public, and Layman and Carsey (2002) also argue that political elites tend to align themselves along a single dimension. In a study of voters from 1972–2004, Jost (2006) finds that self-reported ideology aligned along a single left-right spectrum was a strong predictor of voting. Indeed, it has been recognized that political elites, including elected and government officials, activist groups, the media and academia, frequently use the single left-right dimension of ideology in framing and debating issues (Jost, Federico, and Napier 2009; McCarty, Poole, and Rosenthal 2005).

Measuring Political Ideology

In order to measure the revealed political ideology of political actors, a growing number of scholars have turned to item response theory (IRT) (Van Der Linden and Hambleton 1997). Initially developed as a collection of tools to measure latent constructs in educational testing and psychology (Van Der Linden and Hambleton 1997; Baker 2001; Lord and Novick 1968; Guttman 1945), IRT offers several benefits compared to factor analysis models. First, IRT modeling is based on the relationship between individual item responses and individual values of the latent trait: it models the probability that a respondent will answer a particular item in a certain way. As Treier and Jackman (2002, 2) write, “IRT models provide the answers to the questions that most social scientists are asking when they engage in measurement: what do the data say about both scores on the latent constructs and the properties of [each of] the items?”. Second, unlike factor analysis, IRT can handle mixed data – including binary, ordinal and continuous items – allowing for more flexibility in research design and analysis (Quinn 2004).

IRT models thus place each individual at a distinct value on a continuous latent trait, or ideal point. In other words, in measuring a latent trait – here, revealed political ideology – IRT can distinguish very slight differences between individuals, groups or organizations. Poole and Rosenthal (1997, 2007) were among the first to demonstrate measures of political ideology using IRT that went beyond party affiliation, generating estimates of ideology for members of Congress from a large number of floor votes. The ideology of bureaucratic agencies and federal secretaries has also been measured (Clinton et al. 2012; Bertelli and Grose 2009; Chen and Johnson 2015), while others have used similar techniques to measure the political ideology of state legislatures (Shor and McCarty 2011), local elected officials (Connolly and Mason 2016), candidates for public office (Bonica 2013), and voters (Bafumi and Herron 2010). In the public sector, scholars have also sought to measure the political ideology of large public agencies (Clinton et al. 2012; Bertelli and Grose 2009; Clinton and Lewis 2008; Chen and Johnson 2015), and others have attempted to measure the political ideology of “interest groups” (often nonprofit organizations) (Mason 2015; McKay 2008, 2010; Bonica 2013; Barber 2015; Poole and Rosenthal 2007).

Data and Methods

Using a Bayesian method developed by Clinton, Jackman, and Rivers (2004) and an estimation technique developed by Martin and Quinn (2002) and Martin, Quinn, and Park (2011), the political ideology of leaders in nonprofit advocacy organizations is estimated in a way that allows direct comparison with other actors – in this case, California state legislators, although members of other legislative bodies, such as other state legislatures, city councils or Congress, could be used. This study uses the MCMCpack package for the R statistical software to estimate leader ideology, although other methods are available. One alternative is the technique developed by Poole and Rosenthal (1997, 2007) and used by others such as McKay (2008, 2010), which relies on DW-NOMINATE, a stand-alone statistical program. Stata 14 now also includes IRT models for use by scholars.

To assemble the data for analysis, eleven roll call votes in the California legislature were first selected. These votes, taken on the floor of the 2012 California legislature, comprised six Senate bills and five Assembly bills. The full list of the bills selected can be seen in Table 1. Each bill was voted on in both houses of the legislature, and the third (substantive) reading and vote was used for analysis. These measures were selected because they had a clear partisan split in both houses of the legislature. In other words, they were largely party-line votes with Democrats on one side and Republicans on the other. They also had high political salience outside the legislature, such that other political elites (here, nonprofit leaders) and even average California voters are assumed to have a position on each issue. For example, one bill chosen was a measure on extending welfare payments to unwed teenage mothers. Another was whether or not to fund California’s controversial high-speed rail project.

Table 1:

Roll call votes from the 2012 California Legislature. The votes included were on the third (substantive) reading.

Each item (or survey question) has both difficulty and discrimination parameters on an item characteristic curve (Baker 2001). Originating in educational testing, a “difficult” question is one that only more able test-takers answer correctly; here, difficulty can be conceptualized as the item’s “location on the latent trait” (Rusch et al. 2016). Discrimination, on the other hand, is how well each item differentiates between conservative and liberal ideologies: the steeper the item characteristic curve, the better the item is able to discriminate between a liberal voter and a conservative voter (Baker 2001). The parameters estimated are the legislators’ ideal points θ_i (their revealed political ideology), along with a difficulty (intercept) parameter α_j and a discrimination (slope) parameter β_j for each roll call j on which legislator i votes. Roll call votes were coded 1 for “Yea” and 0 for “Nay”. The discrimination parameter β_j captures the change in the probability of voting yea on a roll call as the member’s ideal point θ_i moves from liberal to conservative. Using the two-parameter, one-dimensional estimation technique developed by Martin, Quinn, and Park (2011) and Clinton, Jackman, and Rivers (2004), ideology is modeled through the unobserved utility of voting “yea” on any particular roll call vote:

$$z_{ij} = \alpha_j + \beta_j \theta_i + \epsilon_{ij} \qquad (1)$$
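
Although the original text does not spell this step out, the link between this latent utility and the observed vote follows in the standard way for this family of models: assuming the errors ε_ij are independent standard normal draws, legislator i votes “yea” on roll call j whenever z_ij > 0, so that

$$\Pr(y_{ij} = 1 \mid \theta_i, \alpha_j, \beta_j) = \Pr(z_{ij} > 0) = \Phi(\alpha_j + \beta_j \theta_i)$$

where Φ(·) is the standard normal cumulative distribution function.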

To help identify the model, Republicans and Democrats who consistently voted along party lines were selected in each chamber, and these members’ ideal points were constrained to −1 (very liberal) and 1 (very conservative). This is a common strategy in the literature (see Poole and Rosenthal 1997; Clinton, Jackman, and Rivers 2004; Martin and Quinn 2002). In the Assembly, Jim Silva (R-Huntington Beach) and Tom Ammiano (D-San Francisco) were fixed at 1 and −1, respectively. The Senators selected were Joel Anderson (R-San Diego) and Juan Vargas (D-Chula Vista, outside San Diego). The parameter estimates for the roll call votes can be seen in Table 2. The large negative discrimination values indicate that the votes distinguish between liberal and conservative members of the legislature; the negative sign is expected because each house has a Democratic majority and a “yea” vote was the liberal choice. Estimates in each chamber are based on a thinned chain of every 20th observation from 500,000 iterations of the Gibbs sampler, with 5,000 draws excluded as burn-in – a total of 25,000 retained draws for each chamber. The models displayed substantial evidence of convergence based on trace, density, and auto-correlation plots. In both chambers, fewer than 10 percent of Geweke (1992) and Heidelberger and Welch (1983) diagnostic statistics failed to suggest convergence.
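
As a concrete illustration, an estimation of this kind can be run with the MCMCirt1d() function in MCMCpack, the package named above. The sketch below is not the study’s replication code: the matrix assembly_votes (legislators in rows, the eleven roll calls in columns, with legislators’ last names as row names) and the exact argument values are illustrative assumptions.

```r
## Minimal sketch of the Assembly estimation, assuming a hypothetical
## legislator-by-roll-call matrix `assembly_votes` (1 = yea, 0 = nay, NA = absent)
## whose row names include the two anchor legislators.
library(MCMCpack)  # provides MCMCirt1d()
library(coda)      # convergence diagnostics

posterior <- MCMCirt1d(
  assembly_votes,
  theta.constraints = list(Silva   = 1,    # R-Huntington Beach, fixed at +1
                           Ammiano = -1),  # D-San Francisco, fixed at -1
  burnin = 100000,   # discarded iterations (5,000 discarded draws at thin = 20)
  mcmc   = 500000,   # Gibbs sampler iterations
  thin   = 20,       # keep every 20th draw: 25,000 retained draws
  store.item = TRUE  # keep the alpha_j and beta_j draws as well as theta_i
)

summary(posterior)      # posterior means and credible intervals
plot(posterior)         # trace and density plots
geweke.diag(posterior)  # Geweke (1992) diagnostics
heidel.diag(posterior)  # Heidelberger and Welch (1983) diagnostics
```

The Senate estimates follow the same pattern, with Anderson and Vargas as the constrained members.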

Table 2:

Senate and assembly bill parameter results, key votes only.

The difficulty parameter identifies the location of the proposal on the left-right scale among the legislators. As proposals move farther away from a legislator’s ideology, they provide less utility to that legislator from voting “yea”; a “nay” vote can therefore also be considered a vote for the status quo. The mean ideal point in the California Assembly was −0.94 (Democrats −0.95, Republicans 1.37). In the California Senate, the mean was −1.04 (Democrats −1.05, Republicans 1.37). Overall, the mean for all legislators was −0.15 (Democrats −0.91, Republicans 1.19). With means to the left of zero, these values demonstrate the left (or liberal) leanings of the California legislature as well as its polarized, bimodal distribution.

Estimating Ideology of Nonprofit Leaders

The sampling frame for this study consisted of the identified leader (usually the CEO or Executive Director or, in the case of smaller organizations, the Board President) of a subsection of 501(c)(4) nonprofit organizations registered in the state of California. A list of organizations was obtained from the National Center for Charitable Statistics during the summer of 2013 and comprised 3,138 organization records. First, groups that were less likely to have an advocacy or policy change mission were removed, including associations like Lions and Rotary Clubs (which accounted for over 1,000 organizations), festival and parade organizations, and Chinese benevolent associations (due to a language barrier). Groups known to be defunct were also removed, leaving a final sample of 1,753. Group records were manually appended with phone numbers and email addresses where these could be found on the organization’s Form 990 or website. Leaders were contacted with a postcard inviting them to participate in an online survey, followed by email and phone calls.

Ultimately, there were 259 eligible responses, a 14.8 % response rate. Although lower than desired, the response rate is not surprising considering that the questions measuring ideology were among the first asked, which has been found to suppress responses to surveys (Groves et al. 2004). A sample analysis was conducted to determine whether there were any known and systematic differences between the organizations of respondents and those of non-respondents. The analysis found that while groups located in urban areas were more likely to respond to the survey, there were no other statistically significant differences by organization mission (i.e. environmental groups vs. civil rights organizations), size of organization, budget, or distance from the state capitol in Sacramento. Although it is not possible to eliminate all concern about selection bias, these results suggest that, despite the low response rate, those that responded to the survey were roughly equivalent to the general population of nonprofit organizations in the sample.

To estimate the revealed ideology of nonprofit leaders, respondents were asked to take positions on issues closely corresponding to the eleven roll call votes analyzed above. The survey asked a series of eleven questions corresponding to the eleven roll call votes, presented in random order for each respondent, with the prompt: “We are interested to know your personal opinion about several key votes in the California legislature in the 2012 session. Your opinions may or may not be the same as your organization’s, and may or may not relate to issues your organization is concerned with. If you were a legislator, would you have supported the following measures? …”, followed by the policy questions with an option to select “yes” or “no”, or to leave the item blank. The list of questions can be found in Table 3. Respondents could leave an item blank if they were not familiar with the issue or did not have an opinion one way or the other. These responses were then appended to the roll call data from the California legislature. As in many similar studies (Poole and Rosenthal 1997; Bertelli and Grose 2011; Bailey 2007), it is assumed that missing votes and survey responses occurred at random, and not because of a systematic underlying difference in ideology or other characteristics.

Table 3:

Policy questions from survey instrument.

If the respondent answered “yes”, their response was coded as a “1” and a “no” was coded as a “0”. In other words, their responses on the survey were treated as if they were roll call votes. This provided a direct comparison between the ideology of the elected officials and nonprofit leaders on these eleven issues.
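
Mechanically, this amounts to stacking the coded survey responses underneath the legislative roll call matrix and re-running the same ideal point model, so that leaders and legislators are scaled on a common dimension. The sketch below illustrates the step; the object names (legislator_votes, leader_responses) and the choice of constrained anchors are illustrative assumptions rather than the study’s actual code.

```r
## Joint scaling sketch: the survey answers (1 = yes, 0 = no, NA = blank) share
## the same eleven columns as the roll call matrix, so they can simply be stacked.
library(MCMCpack)

combined <- rbind(legislator_votes, leader_responses)

joint_posterior <- MCMCirt1d(
  combined,
  theta.constraints = list(Anderson = 1,    # reliably conservative anchor
                           Vargas   = -1),  # reliably liberal anchor
  burnin = 100000, mcmc = 500000, thin = 20
)

## Posterior mean ideal point for every legislator and leader on one scale
theta_hat <- colMeans(as.matrix(joint_posterior))
```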

Findings and Consistency Checks

The process described above provided ideal point estimates for the legislature alone and then for the legislature combined with the nonprofit leaders. While the polarization of the legislators is apparent, showing the large difference between Democrats and Republicans in both houses, the nonprofit leaders are considerably more moderate: the mean ideal point of respondents under this procedure is 0.121 and the median is 0.11 (σ=0.60), indicating that they sit slightly to the right of zero and are moderate compared to the mean and median voter in both houses of the California legislature (Senate mean −0.259, σ=1.23 in the merged sample; Assembly mean −0.295, σ=1.24). This is shown graphically in Figure 1, where the bimodal distribution of the legislature can be compared to the more normally distributed ideal points of the nonprofit leaders.

Figure 1:

Comparison of the ideology of senate, assembly and nonprofit leaders. Political ideology is measured on a left-right continuum. As an individual moves further to the right, they can be considered more conservative (N=217 nonprofit leaders, 80 California Assembly, and 40 California Senate).

Considering these were leaders of 501(c)(4) organizations, this might be considered surprising. There has long been a normative expectation that “interest groups” are more extreme than the legislature and apply pressure to legislators to vote in ways that are outside the legislators’ own personal preferences. This was supported by Poole and Rosenthal’s (1997, 2007) early work with Item Response Theory, which compared the voting records of elected officials with the “votes” of interest groups in Washington, D.C., assembled from organization scorecards (documents prepared by groups showing the voting records of legislators that support and oppose the organization’s official positions, providing a proxy for the organization’s positions on a range of issues). Their findings suggest that interest group ideology is more extreme than that of the members of Congress. However, more recent research has started to cast doubt on this assumption. Bonica (2013) demonstrated that Political Action Committees (PACs), which provide funding for candidates running for office, are more ideologically moderate than Congress. Bafumi and Herron (2010) argued that voters are more ideologically moderate than those they have voted to send to Congress. Barber (2015) finds mixed results, showing that PACs tend to be more moderate with their giving, while individual donors tend to support more extreme candidates for office. He speculates that PACs are more concerned about working both sides of the aisle to gain access to those who are ultimately elected. Since many 501(c)(4) groups are engaged in advocacy, it is likely that these nonprofit organizations are, like PACs, interested in access over partisan ideology. In fact, the data show that 61 % of the nonprofit leaders were statistically indistinguishable from the median member of both the Senate and the Assembly. In other words, despite the polarized distribution of California state legislators, nonprofit leaders tend to sit between the two extremes, in the center of the distribution.

Additional data on the leaders’ organizations, provided by the National Center for Charitable Statistics (NCCS), also help to differentiate groups by their mission. For example, it was easy to compare the ideology of leaders of arts and culture organizations with that of leaders of animal rights groups. Figure 2 shows the mean ideology of the leaders by their organization’s mission and does show some variance among leaders of different types of organizations. While this variance is interesting and raises questions for future research, this variable was primarily used for control and descriptive purposes.

Figure 2:

Leader ideology by organizational issue field (N=208).

Since the survey wording was a simpler description of the complex legislation considered by the state legislature (to convey the “real meaning” of the legislation and to reduce the noise of complex legislative language), one could argue that the survey responses and roll call votes are not comparable. To check the consistency of the IRT-generated estimates of revealed political ideology, four tests were conducted. First, an exploratory factor analysis was conducted to see whether the items hold together as one “factor”, although Ip (2010) demonstrates that unidimensional IRT models are largely robust to modestly multidimensional constructs. The factor loadings demonstrate that all eleven items hold together well as a single factor (ideology), with a resulting Cronbach’s alpha of 0.90. While it is possible that more than one underlying latent factor is present, all of these items hold together as the single dimension of ideology used in this illustration. This was confirmed by an additional multidimensionality check in MCMCpack in R. Second, the nonprofit leaders’ survey items were analyzed separately, independent of the roll call votes, constraining the scores of one respondent who was reliably “liberal” (−1) and one who was reliably “conservative” (1). The mean ideal point for all respondents – independent of legislators – was 0.121. This test indicated that all questions continued to discriminate between liberal and conservative respondents (all items discriminated in the expected direction), where a “Yes” on any given item indicated a liberal orientation.
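
The first of these checks can be approximated in base R, as sketched below. This is not the study’s code: leader_responses is an assumed data frame holding the eleven 0/1 survey items, missing values are dropped listwise for simplicity, and Cronbach’s alpha is computed directly from its definition rather than with a dedicated package.

```r
## Rough sketch of the unidimensionality check on the eleven 0/1 survey items,
## assuming a hypothetical data frame `leader_responses`.
items <- na.omit(leader_responses)

## Exploratory factor analysis: do all items load on a single factor?
fa1 <- factanal(items, factors = 1)
print(fa1$loadings)

## Cronbach's alpha from its definition:
## (k / (k - 1)) * (1 - sum of item variances / variance of the total score)
cronbach_alpha <- function(x) {
  x <- as.matrix(x)
  k <- ncol(x)
  (k / (k - 1)) * (1 - sum(apply(x, 2, var)) / var(rowSums(x)))
}
cronbach_alpha(items)
```

The second check is simply a re-run of the ideal point model on the survey responses alone, with one reliably liberal and one reliably conservative respondent constrained to −1 and 1.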

Finally, two additional tests were conducted, similar to those used by Bafumi and Herron (2010) in their study comparing voters’ responses to online survey questions with roll call votes in the U.S. Congress. In the first test, the IRT-generated estimates of political ideology were correlated with a question on self-reported political ideology. The question, “When it comes to politics do you usually think of yourself as liberal or conservative,” offered eight choices: a seven-point scale from “extremely liberal” to “extremely conservative” plus an “I don’t know” option (which received zero responses). The IRT estimate of ideology correlated with this self-reported liberal-to-conservative measure at 0.63. In addition, a series of bivariate logistic regressions was conducted, with the response to each survey item as the dependent variable and the IRT-generated ideology score as the independent variable. Each item was statistically significant at p<0.01. Table 4 provides the intercepts (constants) and slopes (coefficients) for each item along with the number of observations for each question. The number of observations changes slightly across items due to skipped questions. The steep slope for nearly all items indicates strong discrimination between liberal and conservative respondents. The negative values indicate that, for each question, the more conservative the respondent, the less likely he or she was to select “yes”.
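
These last two checks are straightforward to reproduce, as sketched below. The names theta_hat (the leaders’ IRT ideal points), self_report (the seven-point self-placement) and leader_responses (the 0/1 item responses) are illustrative assumptions; only base R functions are used.

```r
## Correlation between IRT ideal points and self-reported ideology
cor(theta_hat, self_report, use = "pairwise.complete.obs")

## One bivariate logistic regression per survey item, as in Table 4:
## intercepts, slopes and p-values come from the model summaries.
fits <- lapply(leader_responses, function(item) {
  glm(item ~ theta_hat, family = binomial())
})
lapply(fits, summary)
```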

Table 4:

Leader ideal points and survey responses.

The survey also asked, “Do you usually think of yourself as a Republican, a Democrat, an Independent, or other?” As Figure 3 demonstrates graphically, Democrats fall to the left of Independents, who in turn fall to the left of Republicans. Ultimately, these tests provide evidence that the method demonstrated here does tap into an underlying latent construct.

Figure 3:

Political ideology of nonprofit leaders by self-reported party. As respondents move right, they can be considered more conservative (N=217).

Using Item Response Theory for Future Research in the Nonprofit Sector

While this paper has demonstrated one way to measure political ideology among social welfare organizations, there are many other applications of IRT in research on nonprofit organizations and the policy arena. For one, while this specific application measured a latent characteristic at the individual level, IRT can also be used to measure characteristics at the group level by aggregating survey or observational data. For example, the political ideology of federal employees has been measured at the agency level (Clinton et al. 2012), and voters have been aggregated by district (Bafumi and Herron 2010) and compared with their members of Congress. Such studies provide a measure of a group’s or population’s overall ideology – a much more powerful tool than relying simply on public statements or voter scorecards (the method most often used for interest groups), election results or exit polling. It would be possible, for example, to measure the political ideology of a nonprofit organization in this fashion by surveying employees or the board (or comparing the ideology of the two groups), or organizational volunteers or donors. Are donors or volunteers more liberal than board members? What are the implications as the organization manages its stakeholder demands? For those organizations that are engaged in advocacy activities, it is also possible to envision a host of research questions of interest to nonprofit scholars. How does the ideology of the organization relate to its activities or mission statement? To its commitment to diversity? How does the ideology of an organization’s leaders affect organizational outcomes or effectiveness? Innovation? How does the ideology of the organization, or its leadership, affect strategic decisions regarding policy advocacy and implementation? These questions, too, can have far-reaching implications for a nonprofit’s role in the wider political environment. One early study by Mason (2015) found differences in advocacy tactics based on the political ideology of organizations’ leaders, but much more can be done.
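
Aggregation of this kind is straightforward once individual ideal points are in hand. The short sketch below assumes a hypothetical data frame leaders with an ideal point column (theta) and grouping columns (ntee_field, stakeholder_group); the names are illustrative only.

```r
## Mean ideal point by organizational issue field (cf. Figure 2)
aggregate(theta ~ ntee_field, data = leaders, FUN = mean)

## Comparing two stakeholder groups, e.g. board members versus donors
## (assumes stakeholder_group has exactly two levels)
t.test(theta ~ stakeholder_group, data = leaders)
```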

Item Response Theory has also been used to measure other latent characteristics that are expected to have important relationships with both individual and organizational behavior. Bertelli et al. (2015), for example, measured job satisfaction, intrinsic motivation and autonomy among employees of federal agencies over time through the use of federal surveys. Bertelli also separately studied bureaucratic turnover intention (2007) and motivation (2006). While most of these previous studies used unidimensional estimates of ideology or other characteristics, IRT can also be used to explore data in other ways, including assessing the item-related parameters themselves, the multidimensionality of various constructs and their relationship to other variables of interest. For example, multiple dimensions of volunteer motivation have already been explored (Penner 2002; Clary and Snyder 1999), and it might be possible to fine-tune these measures with item response theory. Pinard’s (2011) study of the theories of social movements and social movement activists also provides an opportunity to measure the drivers of activism and advocacy. In further exploration of political ideology, some scholars have examined ideology as a multi-dimensional construct (Feldman and Johnston 2014), particularly among the voting public (Layman and Carsey 2002). In a study of conservative voters, Zumbrunnen and Gangl (2008) found that conservatism is defined by two separate dimensions, cultural and economic. Cultural conservatism involves identifying strongly with “traditional values” and religious beliefs, while economic conservatism is related to both a free-market orientation and limited government. They found that cultural conservatism was the primary driver of voters’ self-identified political ideology. Recently, Feldman and Johnston (2014) also showed that political ideology is best understood along two different dimensions, economic and social.

Conclusion

Item Response Theory can be used to measure latent constructs of interest to nonprofit and public policy scholars. In this study, one technique was demonstrated to measure the revealed political ideology of nonprofit leaders, which may be of interest to those who are interested in understanding how the ideology of leaders, elected officials or organizations may affect policy formation, policy implementation or other behaviors. However, this paper also has some limitations. First, the response rate for the survey was lower than desired, and selection bias cannot be ruled out based on the response rate and the fact that some respondents seemed uncomfortable with the political nature of the survey questions. Readers should take this into consideration. Second, it’s important to note that this study compares the political ideology of elected officials with nonprofit leaders only on the eleven selected issues that were voted on in the 2012 California legislature, similar to a model used by Connolly and Mason (2016) and not inconsistent with that used by McKay (2008, 2010). While the approach presented here is simple to operationalize (which might be welcomed by scholars), there are other approaches to estimating the revealed ideology of legislators, such as including all roll call votes cast in a single year or over multiple years. Finally, the makeup of the California legislature changes every two years, issues gain and lose political salience and turnover in nonprofit organizations is not insignificant. It is important to replicate this design among other types of organizations, engaged in different types of activities, in other locations and at other points in time.

While this study demonstrated one way to measure political ideology and provided several consistency checks, survey and observation instruments should continue to be designed, and tested, in ways that limit systematic bias and support both internal and external validity. Ultimately, IRT can be added to the toolbox of nonprofit scholars who seek to understand how unobservable characteristics of different populations of interest affect organizational behavior.

References

  • Ainsworth, S. 1993. “Regulating Lobbyists and Interest Group Influence.” The Journal of Politics 55(1):41–56.

  • Ainsworth, S., and I. Sened. 1993. “The Role of Lobbyists: Entrepreneurs with Two Audiences.” American Journal of Political Science 37(3):834–866.

  • Alford, J. R., C. L. Funk, and J. R. Hibbing. 2005. “Are Political Orientations Genetically Transmitted?” American Political Science Review 99(2):153–167.

  • Bafumi, J., and M. Herron. 2010. “Leapfrog Representation and Extremism: A Study of American Voters and Their Members in Congress.” American Political Science Review 104(3):519–542.

  • Bailey, M. A. 2007. “Comparable Preference Estimates across Time and Institutions for the Court, Congress, and Presidency.” American Journal of Political Science 51(3):433–448.

  • Baker, F. B. 2001. The Basics of Item Response Theory, 2nd ed. Washington, DC: Office of Educational Research and Improvement.

  • Barber, M. J. 2015. “Ideological Donors, Contribution Limits, and the Polarization of American Legislatures.” The Journal of Politics 78(1):296–310.

  • Bertelli, A. M. 2006. “Motivation Crowding and the Federal Civil Servant: Evidence from the U.S. Internal Revenue Service.” International Public Management Journal 9(1):3–23.

  • Bertelli, A. M. 2007. “Determinants of Bureaucratic Turnover Intention: Evidence from the Department of the Treasury.” Journal of Public Administration Research and Theory 17(2):235–258.

  • Bertelli, A. M. 2012. The Political Economy of Public Sector Governance. New York: Cambridge University Press.

  • Bertelli, A. M., and C. R. Grose. 2009. “Secretaries of Pork: A New Theory of Distributive Public Policy.” Journal of Politics 71(3):926–945.

  • Bertelli, A. M., and C. R. Grose. 2011. “The Lengthened Shadow of Another Institution? Ideal Point Estimates for the Executive Branch and Congress.” American Journal of Political Science 55(4):767–781.

  • Bertelli, A. M., D. P. Mason, J. Connolly, and D. Gastwirth. 2015. “Measuring Agency Attributes with Attitudes across Time: A Method and Examples Using Large-Scale Federal Surveys.” Journal of Public Administration Research and Theory 25(2):513–544.

  • Bonica, A. 2013. “Ideology and Interests in the Political Marketplace.” American Journal of Political Science 57(2):294–311.

  • Chen, J., and T. Johnson. 2015. “Federal Employee Unionization and Presidential Control of the Bureaucracy: Estimating and Explaining Ideological Change in Executive Agencies.” Journal of Theoretical Politics 27(1):151–174.

  • Clary, E. G., and M. Snyder. 1999. “The Motivations to Volunteer: Theoretical and Practical Considerations.” Current Directions in Psychological Science 8(5):156–159.

  • Clinton, J. D., A. Bertelli, C. R. Grose, D. E. Lewis, and D. C. Nixon. 2012. “Separated Powers in the United States: The Ideology of Agencies, Presidents, and Congress.” American Journal of Political Science 56(2):341–354.

  • Clinton, J. D., S. Jackman, and D. Rivers. 2004. “The Statistical Analysis of Roll Call Data.” American Political Science Review 98(2):355–370.

  • Clinton, J. D., and D. E. Lewis. 2008. “Expert Opinion, Agency Characteristics, and Agency Preferences.” Political Analysis 16(1):3–20.

  • Clinton, J. D., D. E. Lewis, and J. L. Selin. 2014. “Influencing the Bureaucracy: The Irony of Congressional Oversight.” American Journal of Political Science 58(2):387–401.

  • Connolly, J., and D. P. Mason. 2016. “Elite Perceptions of Service Reductions at the Local Level.” Political Research Quarterly 69(4):830–841.

  • Cox, G. W., and M. D. McCubbins. 1986. “Electoral Politics as a Redistributive Game.” Journal of Politics 48(2):370–389.

  • Cox, G. W., and M. D. McCubbins. 2007. Legislative Leviathan: Party Government in the House. New York: Cambridge University Press.

  • Downs, A. 1957. An Economic Theory of Democracy. New York: Harper.

  • Epstein, D., and S. O’Halloran. 1999. Delegating Powers: A Transaction Cost Politics Approach to Policy Making under Separate Powers. Cambridge: Cambridge University Press.

  • Feldman, S., and C. Johnston. 2014. “Understanding the Determinants of Political Ideology: Implications of Structural Complexity.” Political Psychology 35(3):337–358.

  • Ferejohn, J., and C. R. Shipan. 1990. “Congressional Influence on Bureaucracy.” Journal of Law, Economics, and Organization 6(6):1–21.

  • Gailmard, S. 2009. “Multiple Principals and Oversight of Bureaucratic Policy-Making.” Journal of Theoretical Politics 21(2):161–186.

  • Groves, R. M., F. J. Fowler, Jr., M. P. Couper, J. M. Lepkowski, E. Singer, and R. Tourangeau. 2004. Survey Methodology. Hoboken, NJ: Wiley-Interscience.

  • Guttman, L. 1945. “A Basis for Analyzing Test-Retest Reliability.” Psychometrika 23:2987–308.

  • Hotelling, H. 1929. “Stability in Competition.” The Economic Journal 39:41–57.

  • Ip, E. H. 2010. “Empirically Indistinguishable Multidimensional IRT and Locally Dependent Unidimensional Item Response Models.” British Journal of Mathematical and Statistical Psychology 63(2):395–416.

  • Jennings, M. K. 1992. “Ideological Thinking among Mass Publics and Political Elites.” The Public Opinion Quarterly 56(4):419–441.

  • Jost, J. T. 2006. “The End of the End of Ideology.” American Psychologist 61(7):651–670.

  • Jost, J. T., C. M. Federico, and J. L. Napier. 2009. “Political Ideology: Its Structure, Functions, and Elective Affinities.” Annual Review of Psychology 60:307–337.

  • Jost, J. T., J. Glaser, A. W. Kruglanski, and F. J. Sulloway. 2003a. “Exceptions that Prove the Rule – Using a Theory of Motivated Social Cognition to Account for Ideological Incongruities and Political Anomalies: Reply to Greenberg and Jonas.” Psychological Bulletin 129(3):383–393.

  • Jost, J. T., J. Glaser, A. W. Kruglanski, and F. J. Sulloway. 2003b. “Political Conservatism as Motivated Social Cognition.” Psychological Bulletin 129(3):339–375.

  • Krehbiel, K. 1996. “Institutional and Partisan Sources of Gridlock: A Theory of Divided and Unified Government.” Journal of Theoretical Politics 8(1):7–40.

  • Krehbiel, K. 2010. Pivotal Politics: A Theory of U.S. Lawmaking. Chicago: University of Chicago Press.

  • Layman, G. C., and T. M. Carsey. 2002. “Party Polarization and ‘Conflict Extension’ in the American Electorate.” American Journal of Political Science 46(4):786.

  • Lin, W., and G. G. Van Ryzin. 2012. “Web and Mail Surveys: An Experimental Comparison of Methods for Nonprofit Research.” Nonprofit and Voluntary Sector Quarterly 41(6):1014–1028.

  • Lord, F. M., and M. R. Novick. 1968. Statistical Theories of Mental Test Scores. Reading, MA: Addison-Wesley.

  • Martin, A. D., and K. M. Quinn. 2002. “Dynamic Ideal Point Estimation via Markov Chain Monte Carlo for the U.S. Supreme Court, 1953–1999.” Political Analysis 10(2):134–153.

  • Martin, A. D., K. M. Quinn, and J. H. Park. 2011. “MCMCpack: Markov Chain Monte Carlo in R.” Journal of Statistical Software 42(9):1–29.

  • Mason, D. P. 2015. “Advocacy in Nonprofit Organizations: A Leadership Perspective.” Nonprofit Policy Forum 6(3):297–324.

  • McCarty, N., K. T. Poole, and H. Rosenthal. 2005. “Polarized America: The Dance of Ideology and Unequal Riches,” August. http://escholarship.org/uc/item/1zz8f29d#page-3

  • McCubbins, M. D., R. G. Noll, and B. R. Weingast. 1987. “Administrative Procedures as Instruments of Political Control.” Journal of Law, Economics, and Organization 3(2):243.

  • McKay, A. 2008. “A Simple Way of Estimating Interest Group Ideology.” Public Choice 136(1/2):69–86.

  • McKay, A. 2010. “The Effects of Interest Groups’ Ideology on Their PAC and Lobbying Expenditures.” Business and Politics 12(2):1–21.

  • Moe, T. M. 1987. “An Assessment of the Positive Theory of ‘Congressional Dominance’.” Legislative Studies Quarterly 12(4):475–520.

  • Moe, T. M. 1988. The Organization of Interests: Incentives and the Internal Dynamics of Political Interest Groups. Chicago: University of Chicago Press.

  • Moe, T. M. 1991. “Politics and the Theory of Organization.” Journal of Law, Economics, and Organization 7(Special Issue):106–189.

  • Penner, L. A. 2002. “Dispositional and Organizational Influences on Sustained Volunteerism: An Interactionist Perspective.” Journal of Social Issues 58(3):447–467.

  • Pinard, M. 2011. Motivational Dimensions in Social Movements and Contentious Collective Action. Montreal: McGill-Queen’s Press.

  • Poole, K. T. 2005. Spatial Models of Parliamentary Voting. New York: Cambridge University Press.

  • Poole, K. T., and H. Rosenthal. 1985. “A Spatial Model for Legislative Roll Call Analysis.” American Journal of Political Science 29(2):357–384.

  • Poole, K. T., and H. Rosenthal. 1997. Congress: A Political-Economic Theory of Roll Call Voting. New York: Oxford University Press.

  • Poole, K. T., and H. L. Rosenthal. 2007. Ideology and Congress. New Brunswick, NJ: Transaction Publishers.

  • Quinn, K. M. 2004. “Bayesian Factor Analysis for Mixed Ordinal and Continuous Responses.” Political Analysis 12:338–353.

  • Rusch, T., P. B. Lowry, P. Mair, and H. Treiblmaier. 2016. “Breaking Free from the Limitations of Classical Test Theory: Developing and Measuring Information Systems Scales Using Item Response Theory.” Information & Management 52(2):189–201.

  • Shepsle, K. A. 1986. “The Positive Theory of Legislative Institutions: An Enrichment of Social Choice and Spatial Models.” Public Choice 50(1–3):135–178.

  • Shor, B., and N. McCarty. 2011. “The Ideological Mapping of American Legislatures.” American Political Science Review 105(3):530–551.

  • Treier, S., and S. Jackman. 2002. “Beyond Factor Analysis: Modern Tools for Social Measurement.” Unpublished manuscript, Annual Meeting of the Western Political Science Association.

  • Truman, D. B. 1951. The Governmental Process: Political Interests and Public Opinion. New York: Knopf.

  • Van Der Linden, W. J., and R. K. Hambleton. 1997. Handbook of Modern Item Response Theory. New York: Springer Science & Business Media.

  • Zumbrunnen, J., and A. Gangl. 2008. “Conflict, Fusion, or Coexistence? The Complexity of Contemporary American Conservatism.” Political Behavior 30(2):199–221.

About the article

Published Online: 2017-03-22

Published in Print: 2019-10-25


Citation Information: Nonprofit Policy Forum, Volume 8, Issue 1, Pages 91–110, ISSN (Online) 2154-3348, ISSN (Print) 2194-6035, DOI: https://doi.org/10.1515/npf-2016-0020.

© 2017 Walter de Gruyter GmbH, Berlin/Boston.