Abstract
We present experimental evidence from a correspondence test of racial discrimination in the labor market for recent college graduates. We find strong evidence of differential treatment by race: black applicants receive approximately 14% fewer interview requests than their otherwise identical white counterparts. The racial gap in employment opportunities is larger when comparisons are made between job seekers with credentials that proxy for expected productivity and/or match quality. Moreover, the racial discrimination detected is driven by greater discrimination in jobs that require customer interaction. Various tests for the type of discrimination tend to support taste-based discrimination, but we are unable to rule out risk aversion on the part of employers as a possible explanation.
1 Introduction
College graduates who entered the labor market during and following the Great Recession experienced high rates of unemployment and underemployment (Abel, Deitz, and Su 2014; Spreen 2013). The labor-market opportunities, while grim for those who completed their degrees during this time period, were worse for blacks. Spreen (2013, table 7) reports unemployment rates for recent college graduates that differ substantially between whites and blacks (10.6% for whites and 20.2% for blacks). Research on the impact of recessions on demographic groups indicates that blacks are disproportionately affected (Hoynes, Miller, and Schaller 2012). [1] In addition, anecdotal evidence suggests a higher degree of selectivity in hiring on the part of employers during and following the Great Recession. [2] Given the higher rates of unemployment for blacks relative to whites, it could be the case that employers are selective on the basis of race. We use data from a randomized résumé-audit study to examine racial discrimination in the labor market for college graduates who completed their degrees during the worst employment crisis since the Great Depression.
Discrimination against minority job seekers is a worldwide phenomenon that has been documented in experimental studies of the labor market (Baert et al. 2013; Bertrand and Mullainathan 2004; Booth, Leigh, and Varganova 2012; Carlsson and Rooth 2007; Oreopoulos 2011). The most common experimental design in this literature combines random assignment of perceived productivity and other résumé characteristics with popular first and last/family names that signal race to identify discrimination (e.g., Bertrand and Mullainathan 2004). However, it has proven conceptually difficult to determine whether discrimination is taste-based (i.e., employers have racist preferences) or statistical (i.e., imperfect information causes employers to update their beliefs about future productivity, which may be correlated with race, when confronted with racial-sounding names). Our primary objective is to determine the extent to which racial discrimination can explain the (un)employment gap between white and black college graduates. If discrimination cannot be ruled out, a secondary objective is to determine whether the source of the discrimination is based on tastes or imperfect information.
If the (un)employment differentials between blacks and whites are large early in their careers, employers may have different beliefs about the quality of experience of white and black workers later in their careers, which could complicate an analysis of racial discrimination. As such, we focus on the employment prospects facing recent college graduates in the context of a résumé-audit experiment in which the races of job applicants are signaled with white- and black-sounding names. Approximately 9,400 randomly generated résumés from fictitious, recently graduated job seekers were submitted to online job advertisements from January 2013 through the end of July 2013. All applicants were assigned a college graduation date of May 2010. [3]
The high rates of unemployment and underemployment experienced by people who graduated with a Bachelor’s degree in the aftermath of the Great Recession are incorporated into our experiment. In particular, we randomize the timing of gaps in work history and indicate both current and past unemployment spells of different durations. Adequate employment and underemployment are simulated by including two types of work experience: (i) in-field experience that requires a college degree and (ii) out-of-field experience that does not require a college degree. The latter is a proxy for underemployment, i.e., employment in a job that is below one’s skill level.
We attempt to further differentiate between statistical and taste-based discrimination, which could arise from perceived differences in the quality of training and/or job-skill match, by assigning approximately half of the applicants traditional business degrees (i.e., accounting, economics, finance, marketing, and management), while the other applicants were assigned degrees from the arts and sciences (i.e., biology, English, history, and psychology). Additionally, we randomly assigned in-field internships to provide another source of experience that is gained before the applicant enters the job market. We then responded to job advertisements exclusively from the business sector (i.e., banking, finance, insurance, management, marketing, and sales) so that we are able to examine how mismatches in qualifications might affect the racial gap in employment opportunities.
Our experimental data indicate that black-named job seekers are approximately 14% less likely to receive interview requests than applicants with white-sounding names. The racial gap in interview rates increases substantially with credentials that proxy high expected productivity and/or match quality (i.e., business degrees, internship experience, and in-field work experience). Moreover, the baseline estimate for the black–white differential in interview rates is driven primarily by greater discrimination in jobs that require customer interaction. We find no evidence that the uniqueness of the racially identifying names, socioeconomic status, gaps in work history, labor-market conditions, or greater racial discrimination against women are the driving forces behind the estimated black–white differentials in interview rates.
Although we are unable to identify conclusively the channel through which the observed racial gap in interview rates arises, our econometric analysis points toward taste-based discrimination as the most likely explanation. First, we find that the racial gap in employment opportunities widens with attributes that indicate high expected productivity and/or a high degree of match quality between the applicant and the firm. Second, the racial gap in employment opportunities is nonexistent at the lowest skill level (i.e., among applicants with non-business degrees, no internship experience, and out-of-field work experience). These patterns in the data are in line with the hypotheses put forward by Ewens, Tomlin, and Wang (2014) as tests for taste-based discrimination. In addition, estimates based on the methodology proposed by Neumark (2012), which decomposes discrimination into level and variance components, suggest that our baseline model understates the extent of taste-based discrimination. Thus, the bulk of the empirical evidence supports taste-based discrimination as the most likely interpretation for our findings, but we cannot rule out risk aversion on the part of employers as a possible explanation for the racial gap in interview rates.
2 Empirical and theoretical background
Earlier studies in the discrimination literature primarily rely on regression analysis of survey data to test for the presence and type of discrimination. For the most part, these studies find lower wages and poorer job opportunities for blacks (Altonji and Blank 1999). Regression-based studies on racial discrimination have been criticized, as the estimates are sensitive to the data set used and choice of control variables (Riach and Rich 2002). The inability to control for unobserved differences between blacks and whites makes it difficult to test reliably for the presence of racial discrimination as well as the channel through which discrimination operates. [4]
Experimental design can circumvent many of the estimation problems associated with survey data. Laboratory experiments have successfully isolated particular channels through which discrimination occurs. Ball et al. (2001) find evidence of in-group bias; Glaeser et al. (2000) find that trust and trustworthiness are important determinants of discrimination; and Fershtman and Gneezy (2001) find evidence of statistical discrimination. [5] However, the ability of researchers to extrapolate the results of laboratory experiments to “real-world” situations has been questioned (Levitt and List 2007). Field experiments provide a useful alternative to laboratory experiments because they take place in naturally occurring environments and, much like laboratory experiments, provide substantial control over the variables of interest. [6]
Two types of field experiments are primarily used to study racial discrimination in the labor market: in-person and correspondence audits. For the in-person audits, white and black “actors” are recruited and trained to navigate the interview process as if they are perfect substitutes. Such studies have been criticized because of the fragility of the estimates to different assumptions regarding unobservables (Heckman 1998; Heckman and Siegelman 1993). In addition, the “actors” in the experiments are aware of the goals of the experiment, which has the potential to influence their behavior and produce misleading results. Correspondence audits, which send résumés instead of actual people to apply for jobs, offer advantages over in-person audits because researchers can make members of particular groups appear identical to employers in every respect other than the variable(s) of interest (e.g., race) via careful matching of applicant characteristics or randomization (Bertrand and Mullainathan 2004; Lahey 2008). [7] Correspondence studies are void of so-called experimenter effects, as the subjects (i.e., employers) are unaware that they are part of an experiment and the job seekers are fictitious. Because employers are unaware that they are the subjects of an experiment, correspondence tests likely elicit the behavior employers exhibit in actual hiring decisions.
The most relevant study for our purpose is Bertrand and Mullainathan (2004), who examine racial discrimination in the U.S. with a correspondence methodology that incorporates racially distinct names to signal race to prospective employers. They find that black applicants receive about 50% fewer callbacks/interviews than their white counterparts. As in most studies of discrimination, Bertrand and Mullainathan (2004) relate their findings to existing theories. Neither taste-based nor statistical discrimination models convincingly explain their results. They argue that lexicographic search by employers, in which employers examine an applicant’s name and look no further, could explain the lower return to credentials that are detected for black applicants.
Our study differs from Bertrand and Mullainathan (2004) in several ways. First, we focus on recent college graduates who entered the labor market during the worst employment crisis since the Great Depression. It is important to understand how discrimination might inhibit skilled workers (i.e., the college-educated) early in their careers, as such discrimination could have important policy implications. Second, we create fictitious job seekers with short work histories, as the use of applicants with lengthy work histories complicates tests for the type of discrimination. A sample of job seekers with short work histories and randomly assigned “hard” skills (i.e., in-field and internship experience) provides a cleaner test for the type of racial discrimination recently graduated job seekers might encounter. Third, we use a recent econometric technique (i.e., Neumark 2012) and testable hypotheses posited by Ewens, Tomlin, and Wang (2014) to aid in sorting out the explanation for the observed black–white differentials. Fourth, we test whether there is greater (or less) discrimination in jobs that require substantial interaction with customers as a means to examine a particular aspect of Becker’s (1971) theory. Fifth, we test whether the extent of racial discrimination is greater (or smaller) in relatively “tight” and “loose” labor markets. Sixth, Bertrand and Mullainathan (2004) focus on administrative- and clerical-type jobs, which results in a focus on racial differences between women rather than racial discrimination against men and women. We apply to a wide range of jobs across six different business categories, which allows us to study racial discrimination within and between sexes across a wider range of occupations.
Neumark (2012) contends that correspondence studies are likely to address complications associated with mean differences in unobservables between blacks and whites. However, both in-person and correspondence audits share the common limitation that the perceived variance of unobserved characteristics may differ between members of particular groups. Unequal variances of the unobserved determinants of the outcome variable can lead to spurious evidence in favor of or against discrimination (Heckman 1998; Heckman and Siegelman 1993). As a result, differentiating between theories based on tastes (Becker 1971) and theories based on imperfect information (Aigner and Cain 1977; Arrow 1973; Cornell and Welch 1996; Lundberg and Startz 1983; Phelps 1972) is equally difficult in in-person and correspondence audits. However, correspondence studies are likely to identify what the law considers discrimination, which is effectively the sum of taste-based and statistical discrimination (Neumark 2012). We use two different approaches to test for different types of discrimination: one used by Bertrand and Mullainathan (2004) and Lahey (2008), which relies on race–credential interactions, and another advanced by Neumark (2012), which decomposes discrimination into “level” and “variance” components. [8]
Ewens, Tomlin, and Wang (2014) posit four testable hypotheses for taste-based discrimination in the context of an output market. We believe three of these hypotheses are testable, albeit imperfectly, with our data. Hypotheses 2A and 3A are of particular interest:
Hypothesis 2A: On average, the response gap between white and black applicants when a positive signal is sent is larger than the response gap between white and black applicants when a negative signal is sent. (p. 125)
Hypothesis 3A: On average, negative information will unambiguously narrow the racial gap observed in the no-signal base case, but positive information will unambiguously widen the racial gap observed in the base case. (p. 125)
While the “no-signal” base case is not possible in a résumé-audit study, which is an advantage of Ewens, Tomlin, and Wang’s (2014) reliance on rental-housing markets, we include substantial variation in the perceived productivity characteristics of the résumés, which allows us to use the framework developed by Ewens, Tomlin, and Wang (2014) to help identify the channel through which racial discrimination operates. In particular, we investigate Hypothesis 2A by testing whether the racial gap in employment opportunities increases when comparisons are made between black and white job seekers with positive attributes (i.e., business degrees, internship experience, and in-field work experience). Moreover, we investigate Hypothesis 3A by comparing black and white applicants with the lowest skill level (i.e., those with non-business degrees, no internship experience, and out-of-field work experience).
3 Experimental design
We submitted approximately 9,400 randomly created résumés to online advertisements for job openings across multiple job categories in seven large cities across the U.S. [9] The job categories are banking, finance, management, marketing, insurance, and sales, and the cities are Atlanta, GA, Baltimore, MD, Boston, MA, Dallas, TX, Los Angeles, CA, Minneapolis, MN, and Portland, OR. The submission of résumés took place from January 2013 through the end of July 2013.
For each job advertisement, we submitted four résumés. The four résumés are randomly assigned a number of different characteristics, which are generated using the computer program developed by Lahey and Beasley (2009). We chose eight applicant names for our study. Four of the names are distinctively female, while the remaining four names are distinctively male. In both the male and female categories, two of the names are “distinctively white,” while the other two names are “distinctively black.” The distinctively white female names are Claire Kruger and Amy Rasmussen, and the distinctively black female names are Ebony Booker and Aaliyah Jackson. The distinctively white male names are Cody Baker and Jake Kelly, and the distinctively black male names are DeShawn Jefferson and DeAndre Washington. [10] Each of the first and family names ranks at or near the top of the “whitest” and “blackest” names in the U.S. We use the racial distinctiveness of the applicants’ names to signal race to prospective employers. [11]
Our fictitious applicants graduated with a Bachelor’s degree in May 2010. We randomly assign each applicant a name (one of the eight listed above), a street address, a university where their Bachelor’s degree was completed, an academic major, (un)employment statuses, [12] whether they report their grade point average (GPA) on their résumé, whether they completed their Bachelor’s degree with an honors distinction, whether they have work experience specific to the job category for which they are applying, and whether they worked as an intern while completing their Bachelor’s degree. Each of these randomized résumé characteristics is coded as a zero–one indicator variable. [13]
While much of the experimental design is produced via randomization, some features of the experiment are held constant. First, we assigned a Bachelor’s degree to each of our fictitious résumés. The assignment of only Bachelor’s degrees is driven by our interest in the labor-market opportunities facing college graduates, particularly those who graduated during the worst employment crisis since the Great Depression. Second, we only applied to jobs in business-related fields: banking, finance, insurance, marketing, management, and sales. We submitted applications to job categories associated with business degrees/experience in order to examine mismatches in qualifications between black and white applicants. Third, we applied to jobs that met the following criteria: (i) no certificate or specific training was required for the job; (ii) the prospective employer did not require a detailed application to be submitted; and (iii) the prospective employer only required the submission of a résumé to be considered for the job. The decision to apply for jobs that did not require detailed application procedures is driven by the need to (a) avoid introducing unwanted variation into the experimental design and (b) maximize the number of résumés submitted at the lowest possible cost. The only decision on our part that could affect the estimates is the selection of the jobs to which applications were submitted. That is, there may be unobserved heterogeneity at the job level. Because we sent four résumés to each job opening, this potential source of bias is mitigated by the inclusion of job-advertisement dummy variables, which hold constant unobservables common to all four résumés. In addition, we cluster standard errors at the job-advertisement level, following other correspondence studies (e.g., Lahey 2008; Neumark 2012).
Because we use randomization to examine the effects of race on employment prospects, it is important to ensure that our randomization process distributes the résumé attributes to black and white applicants in similar ways. Table 1 presents the means for a subset of the résumé characteristics for all applicants (column 1), black applicants (column 2), and white applicants (column 3). [14] In column 4, the p-values for the difference-in-means tests between black and white applicants for each résumé attribute are presented. [15] It is apparent from the difference-in-means tests that black and white applicants are assigned each of the résumé characteristics similarly, as none of the estimated differentials is statistically different from zero. In addition, the sample means for the résumé characteristics overall and by race are consistent with the probabilities chosen for the random assignment of the résumé credentials (see Online Appendix A1.1 for information on these probabilities).
Covariate balance between black and white applicants.
Covariate | All applicants | Black applicants | White applicants | p-Values for black–white differences |
Female | 0.499 | 0.494 | 0.504 | 0.331 |
High socioeconomic status | 0.499 | 0.498 | 0.501 | 0.804 |
No gap in work history | 0.254 | 0.254 | 0.256 | 0.782 |
3-Month front-end gap | 0.125 | 0.120 | 0.129 | 0.180 |
6-Month front-end gap | 0.121 | 0.121 | 0.121 | 0.918 |
12-Month front-end gap | 0.125 | 0.128 | 0.122 | 0.383 |
3-Month back-end gap | 0.124 | 0.124 | 0.124 | 0.949 |
6-Month back-end gap | 0.123 | 0.124 | 0.122 | 0.756 |
12-Month back-end gap | 0.127 | 0.129 | 0.125 | 0.493 |
Business degree | 0.552 | 0.551 | 0.553 | 0.907 |
Internship experience | 0.248 | 0.248 | 0.249 | 0.866 |
In-field work experience | 0.501 | 0.498 | 0.502 | 0.696 |
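The balance check summarized in Table 1 amounts to a difference-in-means test for each randomized attribute. A minimal sketch is below; because the experimental data are not reproduced here, the résumé attributes are simulated, and the assignment probabilities are illustrative assumptions rather than the values documented in Online Appendix A1.1.

```python
# Sketch of the covariate-balance check in Table 1, using simulated data.
# The assignment probabilities are illustrative assumptions, not the values
# used in the actual experiment (see Online Appendix A1.1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 9396  # number of submitted resumes

black = rng.binomial(1, 0.5, n)  # race signaled via name, randomized
attributes = {
    "female": rng.binomial(1, 0.5, n),
    "high_ses_address": rng.binomial(1, 0.5, n),
    "business_degree": rng.binomial(1, 0.55, n),
    "internship": rng.binomial(1, 0.25, n),
    "in_field_experience": rng.binomial(1, 0.5, n),
}

# Difference-in-means test between black- and white-named applicants for
# each attribute; with proper randomization, none should differ systematically.
for name, x in attributes.items():
    t, p = stats.ttest_ind(x[black == 1], x[black == 0], equal_var=False)
    print(f"{name:>22}: black mean {x[black == 1].mean():.3f}, "
          f"white mean {x[black == 0].mean():.3f}, p-value {p:.3f}")
```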
We proxy employment opportunities with interview requests from prospective employers. A response is treated as an interview request when an employer calls or e-mails to schedule an interview or requests to speak in more detail about the opening with the applicant. Our measure of employment prospects, i.e., the interview rate, is similar to the measures commonly used in other correspondence studies (e.g., Bertrand and Mullainathan 2004). It is possible for us to consider “positive” responses (e.g., Lahey 2008), but the estimates are not sensitive to this alternative coding of the dependent variable because the majority of “callbacks” fall into the interview-request category. [16] As a result, we omit these results from the paper.
Average interview rates.
All (1) | White (2) | Black (3) | Difference in means (4) | |
Overall | 0.166 | 0.180 | 0.152 | −0.028*** |
By city | ||||
Atlanta | 0.131 | 0.148 | 0.114 | −0.034** |
Baltimore | 0.257 | 0.254 | 0.248 | −0.006 |
Boston | 0.130 | 0.144 | 0.116 | −0.028 |
Dallas | 0.180 | 0.199 | 0.161 | −0.038** |
Los Angeles | 0.138 | 0.157 | 0.119 | −0.037** |
Minneapolis | 0.181 | 0.200 | 0.163 | −0.037** |
Portland | 0.160 | 0.169 | 0.152 | −0.017 |
By job category | ||||
Banking | 0.090 | 0.112 | 0.070 | −0.042** |
Finance | 0.102 | 0.110 | 0.094 | −0.015 |
Insurance | 0.243 | 0.276 | 0.210 | −0.065*** |
Management | 0.103 | 0.107 | 0.099 | −0.007 |
Marketing | 0.214 | 0.218 | 0.209 | −0.008 |
Sales | 0.215 | 0.233 | 0.195 | −0.038** |
Table 2 presents summary statistics for the interview rates overall and by race. The baseline interview rate in the sample is slightly over 16%, with white applicants having a higher-than-average interview rate and black applicants having a lower-than-average interview rate. The unconditional difference in the interview rates between black and white applicants is approximately 2.7 percentage points, which is statistically significant at the 1% level. The interview rates vary across cities. Atlanta and Boston have the lowest overall interview rates at about 13%, while Baltimore has the highest interview rate at about 25%. When the city-specific interview rates are separated by race, we observe lower interview rates for blacks relative to whites in every city. The majority of the unconditional city-specific differences in the interview rates between black and white applicants are statistically significant at conventional levels. There is also variation in the interview rates by job category. Insurance, marketing, and sales have the highest interview rates, each in excess of 20%. Banking, finance, and management have the lowest interview rates, at around 10% or slightly less. The interview rates for black applicants are lower, in some cases substantially, than those for their white counterparts in each of the job categories. The unconditional differences in the interview rates between black and white applicants are statistically significant at conventional levels for most of the job categories. While the racial differences in interview rates presented in Table 2 are suggestive of differential treatment by race, a formal analysis is required to determine whether these differences reflect discrimination and, if so, the type of discrimination observed.
4 Results
4.1 Baseline estimates
Our baseline regression model is

$$\text{interview}_{imcfj} = \beta_0 + \beta_1\,\text{black}_i + X_i'\delta + \mu_m + \theta_c + \phi_f + \lambda_j + u_{imcfj}. \qquad [1]$$

The subscripts i, m, c, f, and j index applicants, months, cities, job categories, and job advertisements, respectively. The variable interview is a zero–one indicator variable that equals one when an applicant receives a request for an interview and zero otherwise; black is a zero–one indicator variable that equals one when the name of the applicant is distinctively black and zero when the name of the applicant is distinctively white; X is a vector of the randomly assigned résumé characteristics described in Section 3; μ, θ, φ, and λ denote month, city, job-category, and job-advertisement effects; and u is the error term.

The use of randomization ensures that the race identifier (black) in eq. [1] is orthogonal to the error term (u), allowing us to interpret the parameter attached to the race identifier ($\beta_1$) as the causal difference in the interview rate between black and white applicants. However, the estimate for $\beta_1$ captures the combined effect of taste-based and statistical discrimination (Neumark 2012), so it cannot by itself reveal the channel through which any differential treatment arises; we return to this issue in Section 4.3.
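As a concrete illustration, eq. [1] could be estimated as a linear probability model along the following lines. This is a minimal sketch with a simulated DataFrame; the column names, the simulated interview-rate gap, and the data-generating choices are illustrative assumptions, not the authors' actual variables or code.

```python
# Minimal sketch of estimating eq. [1] as a linear probability model with
# job-advertisement fixed effects and clustered standard errors. The data
# are simulated purely for illustration; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_ads, per_ad = 500, 4
df = pd.DataFrame({
    "job_ad_id": np.repeat(np.arange(n_ads), per_ad),
    "black": rng.binomial(1, 0.5, n_ads * per_ad),
    "female": rng.binomial(1, 0.5, n_ads * per_ad),
    "business_degree": rng.binomial(1, 0.55, n_ads * per_ad),
    "internship": rng.binomial(1, 0.25, n_ads * per_ad),
    "in_field_experience": rng.binomial(1, 0.5, n_ads * per_ad),
})
# Simulated outcome with a roughly 2.5 percentage point black-white gap.
p = 0.18 - 0.025 * df["black"]
df["interview"] = rng.binomial(1, p.to_numpy())

# Advertisement dummies absorb month, city, and job-category effects, since
# each posting belongs to exactly one of each (as in column (6) of Table 3).
model = smf.ols(
    "interview ~ black + female + business_degree + internship"
    " + in_field_experience + C(job_ad_id)",
    data=df,
)
# Cluster at the advertisement level: four resumes went to each posting.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["job_ad_id"]})
print(result.params["black"], result.bse["black"])
```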
Race and job opportunities.
(1) | (2) | (3) | (4) | (5) | (6) | |
Black | −0.028*** | −0.027*** | −0.027*** | −0.027*** | −0.026*** | −0.022*** |
(0.007) | (0.007) | (0.007) | (0.007) | (0.007) | (0.006) | |
Controls: | ||||||
Résumé | No | Yes | Yes | Yes | Yes | Yes |
Month | No | No | Yes | Yes | Yes | Yes |
City | No | No | No | Yes | Yes | Yes |
Category | No | No | No | No | Yes | Yes |
Advertisement | No | No | No | No | No | Yes |
R² | 0.002 | 0.008 | 0.010 | 0.018 | 0.044 | 0.724 |
Adjusted R² | 0.001 | 0.005 | 0.006 | 0.014 | 0.039 | 0.630 |
Observations | 9396 | 9396 | 9396 | 9396 | 9396 | 9396 |
Table 3 presents estimates of the parameter $\beta_1$ from eq. [1]. Moving across the columns, we progressively add controls for the randomized résumé characteristics, the month of submission, the city, the job category, and the individual job advertisement.

For the comparisons between black and white applicants, the estimated black–white differentials in interview rates range from 2.2 to 2.8 percentage points. The most reliable estimate is likely the one shown in column (6), which includes the complete set of control variables (i.e., the résumé characteristics and the month, city, job-category, and job-advertisement controls). Relative to the mean interview rate, the estimates imply that black applicants are approximately 14% less likely than otherwise identical white applicants to receive an interview request.
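To make the relative magnitude explicit, dividing the estimated differentials by the corresponding interview rates from Table 2 gives

$$\frac{0.022}{0.166} \approx 0.13 \qquad \text{and} \qquad \frac{0.028}{0.180} \approx 0.16,$$

which bracket the approximately 14% figure reported in the abstract.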
4.2 Sensitivity checks
Our first sensitivity check examines whether the interview rates differ by race and sex. [19] Table 4 presents these estimates. Columns (1) and (2) provide within-sex comparisons, and columns (3) and (4) provide between-sex comparisons. Each of the estimated interview differentials is negative and statistically significant at conventional levels. Ultimately, these tests reveal that black men and black women experience similar treatment in the labor market in terms of interview rates, as both have lower interview rates than white men and white women. The magnitudes of the estimated differentials vary somewhat, but statistical tests indicate that the black–white male differential is not statistically different from the black–white female differential. Given that there is no statistical evidence of race–sex differences in interview rates, the remainder of the sensitivity checks focuses on racial differences in lieu of race–sex differences. [20]
Race–sex interactions and interview rates.
Black men versus white men (1) | Black women versus white women (2) | Black men versus white women (3) | Black women versus white men (4) | |
Difference in the interview rate | −0.019** | −0.025*** | −0.027*** | −0.016** |
(0.008) | (0.009) | (0.009) | (0.008) |
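A sketch of the kind of comparison summarized in Table 4 is below: it adds an explicit race-by-sex interaction to the specification in eq. [1] and tests the black–white gap separately for men and women, as well as the difference between the two gaps. It continues with the simulated DataFrame `df` from the earlier sketch, and the column names remain hypothetical.

```python
# Sketch of the race-sex comparisons behind Table 4; the interaction
# coefficient measures how much the black-white gap among women departs
# from the gap among men. Uses the hypothetical/simulated `df` from above.
import statsmodels.formula.api as smf

df["black_x_female"] = df["black"] * df["female"]
model = smf.ols(
    "interview ~ black + female + black_x_female + business_degree"
    " + internship + in_field_experience + C(job_ad_id)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["job_ad_id"]})

print(result.t_test("black = 0"))                   # black-white gap, men
print(result.t_test("black + black_x_female = 0"))  # black-white gap, women
print(result.t_test("black_x_female = 0"))          # difference in the gaps
```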
Although the use of racially distinct names as a signal of race is not a perfect substitute for the random assignment of race, it is perhaps the best approach advanced in the literature in recent years. However, the use of racially distinct names introduces potential confounds. For example, Charles and Guryan (2011) argue that employers could view distinctively black names as unique or odd, and discriminate based on those perceptions. Such differential treatment would be discrimination, but it would not be racial in nature.
We use the Social Security Administration’s data on baby names to examine the popularity of the first names assigned to our black and white applicants. While the rankings change from year to year, we examine the rankings (in terms of popularity) of the chosen first names to obtain a sense of how common or uncommon the first names are for babies born in the late 1980s and early 1990s, which is approximately when our applicants would have been born. For the white names, Amy is ranked about 50th; Claire is ranked about 150th; Cody is ranked about 40th; and Jake is ranked about 140th. For the black names, Ebony is ranked about 160th; Aaliyah is ranked about 200th; DeAndre is ranked about 250th; and DeShawn is ranked about 450th. While the distinctively black names are less frequent, it is important to point out that these rankings are based on the popularity of male and female names overall, not by race. In addition, data on last/family names from the U.S. Census suggest that the “white” and “black” last/family names chosen are likely to reinforce the racial distinctiveness of the first names.
A second criticism of using racially distinct names is that they may signal socioeconomic status instead of race. We incorporate socioeconomic status into our experimental design by randomly assigning street addresses in neighborhoods with high and low house prices. The indicator for high socioeconomic status is a street address with house prices of $750,000 or more, while the indicator for low socioeconomic status is a street address with house prices that are $100,000 or less.
While there is no clear-cut way to deflect concerns that the racially distinct names reflect race in lieu of uniqueness or socioeconomic status, we use two approaches to address these concerns. First, we examine a subset of the full sample that excludes the most popular and least popular first names from the sample. The names with the highest rankings are Amy and Cody, and the name with the lowest ranking is DeShawn. Excluding observations from applicants with these names effectively results in a sample of applicants with names that have similar frequency in the population. We address the socioeconomic-status concern by estimating racial differences in interview rates for applicants with street addresses in high- and low-socioeconomic-status neighborhoods, which is similar to the strategy used by Bertrand and Mullainathan (2004).
The sensitivity checks focused on the uniqueness and socioeconomic status of the racially distinct names are presented in Table 5. Column (1) shows the estimated difference in the interview rate between black and white applicants with common names; columns (2) and (3) present the estimated differences in the interview rates between black and white applicants with low-socioeconomic-status addresses; and columns (4) and (5) present the estimated differences in the interview rates between black and white applicants with high-socioeconomic-status addresses. Columns (2) and (3) and columns (4) and (5) differ based on the sample used: columns (2) and (4) use the full sample, while columns (3) and (5) use the subsample of applicants with common names. In column (1), the estimates indicate that black applicants have a 2.7 percentage point lower interview rate than otherwise identical white applicants, and this estimated differential is statistically significant at the 1% level. The estimates for applicants with low-socioeconomic-status street addresses range from −0.022 to −0.029, depending on the sample used. Each of these estimates is statistically significant at the 5% level. The estimates for applicants with high-socioeconomic-status street addresses range from −0.021 to −0.023. The former estimate is statistically significant at the 5% level, while the latter is statistically significant at the 10% level. The coefficient estimate for the interaction term, which tests whether the estimated black–white differential for applicants with high-socioeconomic-status addresses differs statistically from that for applicants with low-socioeconomic-status addresses, is omitted from Table 5, but the estimate is economically small and not statistically different from zero. To the extent that the subset of names analyzed is truly common, which is supported by the name data, and that our street-address measure reliably indicates socioeconomic status, the results in Table 3 do not appear to reflect differential treatment based on the uniqueness of the applicants’ first and last names or on socioeconomic status, which increases the likelihood that our estimates reflect differential treatment by race.
Race, uniqueness, and socioeconomic status.
 | Common names | Low socioeconomic status | | High socioeconomic status | |
 | (1) | Full sample (2) | Common names (3) | Full sample (4) | Common names (5) |
Black | −0.027*** | −0.022** | −0.029** | −0.021** | −0.023* |
(0.009) | (0.009) | (0.014) | (0.008) | (0.014) | |
Observations | 5,811 | 9,396 | 5,811 | 9,396 | 5,811 |
Because we randomized gaps in the work histories of applicants, it is possible that the black–white differentials detected in Table 3 could be driven by lower interview rates for blacks with unemployment spells. To investigate this possibility, we estimate a variant of eq. [1] that includes interactions between the race identifier and unemployment-spell identifiers. The estimates presented in Table 6 test whether unemployment spells affect blacks more or less adversely than their white counterparts. The estimates shown in Table 6 indicate that the black–white differentials detected in Table 3 are not driven by greater discrimination against blacks with current unemployment spells. None of the estimates are statistically significant at any reasonable level, nor is it likely that the estimated differentials would be considered economically important. [21]
The next set of sensitivity checks examines whether racial discrimination varies with labor-market conditions. Because of the historically high rates of unemployment in each city studied during our sample period, the labor-market conditions present in each city would likely be considered “slack” or “loose.” However, there is variation in the levels of unemployment in these cities in which résumés were submitted. The cities with relatively lower unemployment rates include Boston, Dallas, and Minneapolis, which had unemployment rates ranging from 5% to 6%. The cities with relatively higher unemployment rates include Baltimore and Los Angeles, which had unemployment rates over 10%. Atlanta and Portland had unemployment rates in between these two extremes, which were about 7%. In our analysis, we consider Boston, Dallas, and Minneapolis as cities with relatively “tight” labor-market conditions, and Baltimore and Los Angeles are treated as having relatively “loose” labor-market conditions. The interview rate in cities with “tight” conditions is about 16%, while it is over 19% in cities with “loose” conditions.
Race, unemployment spells, and job opportunities.
(1) | (2) | (3) | (4) | (5) | (6) | |
Black | −0.008 | −0.011 | −0.002 | 0.019 | 0.006 | −0.013 |
(0.019) | (0.021) | (0.020) | (0.026) | (0.024) | (0.026) |
Table 7 presents the estimates for the black–white interview differential in cities with relatively tight labor-market conditions (column 1), the black–white interview differential in cities with relatively loose labor-market conditions (column 2), and the relative black–white differential between cities with loose and tight labor-market conditions. From Table 7, black applicants have interview rates that are 3.1 (column 1) and 2.2 (column 2) percentage points lower than their white counterparts in tight and loose labor markets, respectively. The test for whether these two differences are statistically different from one another indicates that the black–white differential in the relatively loose labor markets is not statistically different from that in the relatively tight labor markets (column 3).
4.3 Empirical tests for different types of discrimination
In general, there are two economic models of discrimination: the taste-based model (Becker 1971) and models of statistical discrimination (Aigner and Cain 1977; Arrow 1973; Cornell and Welch 1996; Lundberg and Startz 1983; Phelps 1972). [22] The key difference between these different models is that the taste-based model emphasizes animosity as the source of differential treatment by race, and models of statistical discrimination are based on incomplete information. Becker’s (1971) model predicts that racist employers would interview fewer black applicants than white applicants, despite both having the same productivity characteristics. [23] Models of statistical discrimination can be separated into three classes: (i) those that emphasize differences in the means of unobservables between blacks and whites; [24] (ii) those that emphasize differences in the variances of unobservables between blacks and whites; and (iii) those that emphasize risk aversion on the part of employers. While there are no definitive tests to isolate the type of discrimination observed, we rely on two approaches to help sort out the competing explanations for the observed patterns in the data: race–credential interactions (Bertrand and Mullainathan 2004; Lahey 2008) and the decomposition of racial discrimination into “level” and “variance” components (Neumark 2012).
The first set of empirical tests uses the following regression equation to examine how race interacts with different productivity/match-quality indicators:

$$\text{interview}_{imcfj} = \beta_0 + \beta_1\,\text{black}_i + \beta_2\,\text{signal}_i + \beta_3\,(\text{black}_i \times \text{signal}_i) + X_i'\delta + \mu_m + \theta_c + \phi_f + \lambda_j + u_{imcfj}. \qquad [2]$$

The subscripts i, m, c, f, and j and the variables black, X, and u are defined as in eq. [1]. The variable signal is a zero–one indicator that equals one when the applicant possesses the productivity/match-quality attribute under consideration (a business degree, internship experience, or in-field work experience) and zero otherwise. The coefficient $\beta_1$ measures the black–white differential for applicants without the signal, the linear combination $\beta_1 + \beta_3$ measures the differential for applicants with the signal, and $\beta_3$ measures the difference between the two.
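One way to recover the three quantities just described is to include an explicit interaction term and test the relevant linear combinations, as in the sketch below. It reuses the simulated `df` from the earlier sketches, takes the business-degree signal as the example, and all column names are hypothetical.

```python
# Sketch of eq. [2] for one productivity signal (business degree); the three
# tests correspond to columns (1)-(3) of Table 8. Uses the simulated `df`
# from the earlier sketches; column names are hypothetical.
import statsmodels.formula.api as smf

df["black_x_signal"] = df["black"] * df["business_degree"]
model = smf.ols(
    "interview ~ black + business_degree + black_x_signal + female"
    " + internship + in_field_experience + C(job_ad_id)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["job_ad_id"]})

print(result.t_test("black = 0"))                   # gap without the signal
print(result.t_test("black + black_x_signal = 0"))  # gap with the signal
print(result.t_test("black_x_signal = 0"))          # difference between gaps
```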
Table 8 presents the estimates. In each panel, column (1) reports the black–white interview differential for applicants without the productivity signal, column (2) reports the differential for applicants with the signal, and column (3) reports the difference between the two. Panel A compares applicants by academic major. For applicants with non-business degrees, the black–white differential is small and not statistically different from zero (column 1). For applicants with business degrees, black applicants have a 3.1 percentage point lower interview rate than otherwise identical white applicants, an estimate that is statistically significant at the 1% level (column 2). The racial gap for applicants with business degrees is 2.1 percentage points larger than that for applicants with non-business degrees, and the difference is statistically significant at the 10% level (column 3).
Race, productivity signals, and job opportunities.
No productivity signal (1) | Productivity signal (2) | Productivity signal relative to no productivity signal (3) | |
Panel A: business degrees | |||
Black | −0.010 | −0.031*** | −0.021* |
(0.009) | (0.008) | (0.012) | |
Panel B: internships | |||
Black | −0.016** | −0.040*** | −0.024* |
(0.008) | (0.013) | (0.015) | |
Panel C: in-field experience | |||
Black | −0.008 | −0.035*** | −0.027** |
(0.008) | (0.009) | (0.013) |
Panel B presents the estimates for the racial gap in employment opportunities for applicants with and without internship experience. In our case, internship experience is a type of in-field work experience, as the applicants were assigned an internship within the job category for which they are applying. [28] Internship experience consists of working as a(n) “Equity Capital Markets Intern” in banking; “Financial Analyst Intern” in finance; “Insurance Intern” in insurance; “Project Management Intern” or “Management Intern” in management; “Marketing Business Analyst” in marketing; and “Sales Intern” or “Sales Future Leader Intern” in sales. For applicants without internship experience, black applicants have a 1.6 percentage point lower interview rate than white applicants (column 1). The analogous differential is more than twice as large for applicants with internship experience (column 2). The difference between the two gaps is economically meaningful: the racial gap in interview rates for applicants with internship experience is 2.4 percentage points larger than that for applicants without internship experience (column 3). The estimates presented in columns (1), (2), and (3) are statistically significant at the 5%, 1%, and 10% levels, respectively.
Panel C presents the estimates for the racial gap in employment opportunities for applicants with and without in-field work experience. In-field work experience varies by the job category: it is working as a “Bank Branch Assistant Manager” in banking; “Accounts Payable” or “Financial Advisor” in finance; “Insurance Sales Agent” in insurance; “Distribution Assistant Manager” or “Administrative Assistant” in management; “Marketing Specialist” in marketing; and “Sales Representative” or “Sales Consultant” in sales. Out-of-field experience is employment at well-known retail stores with either a “Retail Associate” or “Sales Associate” job title. [29] The “out-of-field” experience that is randomly assigned to applicants is effectively a form of “underemployment,” as a college degree would not be required for these types of jobs. For applicants with out-of-field experience, we find no statistical evidence of a differential in the interview rates between black and white applicants (column 1). However, we find economically and statistically significant interview differentials between black and white applicants with in-field work experience. In particular, the interview rate for black applicants with in-field work experience is 3.5 percentage points lower than that for white applicants with in-field work experience (column 2). In addition, the estimated difference in the interview rate between black and white applicants with in-field work experience is larger than the analogous differential for applicants with out-of-field experience, and it is statistically significant at conventional levels (column 3). [30]
In Table 9, we examine the racial gap in employment opportunities between job seekers with none, some, or all of the three aforementioned productivity signals. In particular, column (1) presents the estimated differential between black and white job applicants with non-business degrees, no internship experience, and out-of-field work experience; column (2) presents the estimated interview differential between black and white applicants with business degrees (also presented in column (2) of Table 8); column (3) presents the estimated interview differential between black and white applicants with business degrees and internship experience; and column (4) shows the estimated interview differential between black and white applicants with business degrees, internship experience, and in-field work experience. [31] We find no evidence of a racial gap in employment opportunities for applicants with non-business degrees, no internship experience, and out-of-field work experience (column 1). However, black applicants have a 19% lower interview rate than white applicants when both have business degrees (column 2). The racial gap in employment opportunities is larger when job seekers have business degrees and internship experience (column 3). In particular, black applicants have a 31% lower interview rate than their white counterparts. When applicants have business degrees, internship experience, and in-field work experience, black applicants have an interview rate that is 33% lower than that for otherwise identical white applicants (column 4). [32]
Racial gap in job opportunities with none, some, or all productivity signals.
(1) | (2) | (3) | (4) | |
Black | 0.008 | −0.031*** | −0.052*** | −0.067*** |
(0.014) | (0.008) | (0.017) | (0.024) | |
Productivity signals | ||||
Business degree | No | Yes | Yes | Yes |
Internship experience | No | No | Yes | Yes |
In-field experience | No | No | No | Yes |
Observations in cells | 1,610 | 1,941 | 643 | 671 |
Our final attempt to shed light on the channel through which discrimination operates relies on the methodology developed by Neumark (2012). [33] Using a heteroskedastic probit model that allows the variance of unobservables to depend on race, we decompose the marginal effect of race into two components: an effect that operates through the “level” and an effect that operates through the “variance.” The level component measures taste-based discrimination, while the variance component measures statistical discrimination. We find that the partial effect, which is the sum of the level and variance components, is −0.025, [34] which is consistent with what we find via the linear probability models presented in Table 3. The marginal effect through the level is −0.038, and the marginal effect through the variance is 0.013. Neither the marginal effect through the level nor the marginal effect through the variance is statistically significant at conventional levels. [35] However, we emphasize the size of the estimate for the level component relative to the baseline estimate from Table 3 (−0.038 versus −0.022). The larger marginal effect through the level suggests that the baseline estimate tends to understate the extent of taste-based discrimination. These findings suggest that the structural parameter, i.e., the level component, is indeed negative and economically large, which could be interpreted as evidence of taste-based discrimination.
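For readers unfamiliar with the building block behind Neumark’s (2012) decomposition, the sketch below fits a heteroskedastic probit in which the standard deviation of the unobservable is allowed to differ by race. It is a simplified illustration on simulated data, not the full decomposition, which additionally exploits the variation in applicant credentials to identify the black–white variance ratio; the data-generating values are illustrative assumptions.

```python
# Heteroskedastic probit: P(y=1 | X) = Phi(X'beta / exp(gamma * black)),
# so the error variance may differ by race. This is only the building block
# of Neumark's (2012) decomposition, illustrated on simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglike(params, y, X, black):
    k = X.shape[1]
    beta, gamma = params[:k], params[k]
    index = X @ beta / np.exp(gamma * black)  # error s.d. differs by race
    p = np.clip(norm.cdf(index), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_het_probit(y, X, black):
    start = np.zeros(X.shape[1] + 1)
    return minimize(neg_loglike, start, args=(y, X, black), method="BFGS").x

# Simulated illustration: exp(gamma_hat) estimates the ratio of the standard
# deviation of unobservables for black relative to white applicants; a ratio
# near one pushes the estimated gap toward the "level" (taste-based) term.
rng = np.random.default_rng(0)
n = 4000
black = rng.binomial(1, 0.5, n)
X = np.column_stack([np.ones(n), black, rng.binomial(1, 0.5, n)])
latent = X @ np.array([-1.0, -0.15, 0.3]) + rng.normal(0, np.exp(0.1 * black))
y = (latent > 0).astype(float)
est = fit_het_probit(y, X, black)
print("beta:", est[:-1], "sigma ratio:", np.exp(est[-1]))
```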
Ewens, Tomlin, and Wang (2014; ETW henceforth) posit a number of hypotheses that allow one to test for taste-based discrimination, which we are able to test, albeit imperfectly, with our data. Their model predicts that the racial gap widens (narrows) as positive (negative) information is added – relative to a “no-information” baseline – to their fictitious applicants’ answers to rental-housing advertisements. While our data do not contain the “no-information” base case, which is an advantage of ETW’s reliance on rental-housing markets over résumé audits of the labor market, we observe an increase in the interview gap between black- and white-named applicants as positive attributes are added to their résumés. Furthermore, there is no difference between white- and black-named applicants who exhibit the lowest level of qualifications in our study – a non-business degree, no internship experience, and out-of-field work experience. These patterns in the data are consistent with Hypotheses 2A and 3A (see Section 2), which buttresses an argument favoring taste-based discrimination as the explanation for our findings. Moreover, the decomposition approach developed by Neumark (2012), which is designed to separate taste-based discrimination from variance-based statistical discrimination, suggests our baseline model tends to understate the extent of taste-based discrimination.
Despite the empirical evidence pointing toward discrimination based on tastes, we are unable to rule out risk aversion on the part of employers as a possible explanation. Given that we applied to higher-skill jobs (i.e., those that require a college degree), it is likely that employers would expect to invest in the human capital of their new hires. Furthermore, there is uncertainty regarding applicant quality. If the signal-to-noise ratio is lower for blacks than it is for whites, employers would interview relatively fewer black applicants because of their aversion to risk, which could explain the larger racial gap at the “high-skill” level. The lower signal-to-noise ratio for blacks could result from the discounting of attributes possessed by blacks (relative to whites) because the employers have less experience interacting with blacks. It is difficult, perhaps impossible, for us to test for this type of discrimination, as we have no way of knowing the extent of interaction an employer has had with a particular group.
4.4 Discrimination in jobs with customer interaction
Becker (1971) contends that discrimination in hiring need not operate through employer preferences. Instead, discrimination can also occur via customer and/or employee discrimination. In this subsection, we examine whether the differential treatment by race differs for jobs that require significant customer interaction. [36] While our data do not provide a clean test of customer discrimination, the submission of applications to many different types of jobs provides an indirect way of examining the possibility that discrimination could occur because of an employer’s beliefs about its customer base. Our approach is similar to that of Holzer and Ihlanfeldt (1998), who interpret greater discrimination in jobs that require contact with customers, such as sales and service occupations, as evidence of customer discrimination.
In our case, we compare the employment opportunities facing black and white applicants for jobs that require contact with customers. To classify the job openings, we use the information conveyed in the job titles to distinguish customer-focused jobs from “other” jobs. In particular, we treat job titles that include the words “Customer,” “Sales,” “Advisor,” “Representative,” “Agent,” and “Loan Officer” as jobs that require interaction with the firm’s customers.
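The keyword-based classification can be implemented with simple string matching, as in the sketch below; the keyword list follows the paper, while the sample titles and matching details (e.g., case handling) are illustrative assumptions.

```python
# Sketch of the job-title classification used to flag customer-contact jobs.
# The keyword list follows the paper; the sample titles and case handling
# are illustrative assumptions.
CUSTOMER_KEYWORDS = ("customer", "sales", "advisor", "representative",
                     "agent", "loan officer")

def is_customer_contact(job_title: str) -> bool:
    """Return True if the job title contains any customer-contact keyword."""
    title = job_title.lower()
    return any(keyword in title for keyword in CUSTOMER_KEYWORDS)

titles = ["Insurance Sales Agent", "Financial Analyst", "Loan Officer I"]
print([is_customer_contact(t) for t in titles])  # [True, False, True]
```

The resulting zero–one flag is the customer indicator that enters the interaction model below.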
We estimate the following regression model:

$$\text{interview}_{imcfj} = \beta_0 + \beta_1\,\text{black}_i + \beta_2\,\text{customer}_j + \beta_3\,(\text{black}_i \times \text{customer}_j) + X_i'\delta + \mu_m + \theta_c + \phi_f + \lambda_j + u_{imcfj}. \qquad [3]$$

The subscripts i, m, c, f, and j and the variables black, X, and u are defined as in eq. [1]. The variable customer is a zero–one indicator that equals one when the job title associated with advertisement j contains one of the customer-contact words listed above and zero otherwise. The linear combination $\beta_1 + \beta_3$ measures the black–white differential in interview rates for jobs that require customer interaction.

Table 10 presents the estimates of $\beta_1 + \beta_3$. Moving from column (1) to column (5), we broaden the definition of customer-contact jobs by adding words to the list used to classify the job titles. Across the five definitions, black applicants have interview rates that are 3.6 to 4.4 percentage points lower than those of their white counterparts in customer-contact jobs, and each estimate is statistically significant at the 1% level. In relative terms, these differentials amount to approximately 28%, about twice the size of the baseline black–white differential reported in Table 3, which suggests that the overall racial gap in interview rates is driven to a large extent by greater discrimination in jobs involving customer contact.
Racial discrimination in jobs with customer interaction.
(1) | (2) | (3) | (4) | (5) | |
Black | −0.036*** | −0.038*** | −0.042*** | −0.041*** | −0.044*** |
(0.013) | (0.014) | (0.013) | (0.013) | (0.013) | |
Words in job title: | |||||
Customer | Yes | Yes | Yes | Yes | Yes |
Sales | Yes | Yes | Yes | Yes | Yes |
Advisor | No | Yes | Yes | Yes | Yes |
Representative | No | No | Yes | Yes | Yes |
Agent | No | No | No | Yes | Yes |
Loan officer | No | No | No | No | Yes |
Observations in cells | 2,701 | 2,797 | 3,128 | 3,255 | 3,377 |
5 Conclusions
We present experimental evidence from a correspondence test of racial discrimination in the labor market for recent college graduates. The correspondence framework, which incorporates a detailed set of randomly assigned productivity characteristics for a large number of résumés from white- and black-named job candidates, provides a powerful method to detect racial discrimination among the college-educated. The analysis of survey data is unlikely to yield convincing evidence of discrimination among the college educated because of selection bias. The coarseness of the education variables (e.g., highest grade completed, school quality, and school inputs) and other productivity characteristics contained in prominent employment data series could also mask important premarket factors that predict differences in the skill distributions between black and white college graduates.
Our results indicate that black-named applicants are approximately 14% less likely than white-named applicants to receive interview requests. We find strong evidence that the racial gap in employment opportunities widens with perceived productivity characteristics. In addition, the differential treatment by race detected appears to operate primarily through greater discrimination in jobs that require significant customer interaction, as we find much larger black–white interview differentials (about 28%) when applying to such jobs. We demonstrate that the estimated black–white differentials in interview rates are unlikely to be driven by the uniqueness of the racially identifying names, socioeconomic status, gaps in work history, labor-market conditions, or greater racial discrimination against women. While it is difficult to determine the precise channel through which discrimination operates, our data tend to support taste-based discrimination, but we are unable to rule out risk aversion on the part of employers as a possible explanation.
Acknowledgments
We thank the Office of Research and Sponsored Programs at the University of Wisconsin – La Crosse and the Economics Department at Auburn University for generous funding. We also thank Taggert Brooks, Mary Hamman, Joanna Lahey, William Lincoln, James Murray, Mark Owens, Artie Zillante, and seminar participants at the 2013 Southern Economic Association and 2014 Society of Labor Economists annual meetings for helpful comments; David Neumark for sharing his programs; and Samuel Hammer, James Hammond, Lisa Hughes, Amy Lee, Jacob Moore, and Yao Xia for excellent research assistance.
References
Abel, J. R., R. Deitz, and Y. Su. 2014. “Are Recent College Graduates Finding Good Jobs?” Federal Reserve Bank of New York: Current Issues in Economics and Finance 20(1):1–8.
Ahmed, A. M., and M. Hammarstedt. 2008. “Discrimination in the Rental Housing Market: A Field Experiment on the Internet.” Journal of Urban Economics 64(2):362–72.
Aigner, D. J., and G. Cain. 1977. “Statistical Theories of Discrimination in Labor Markets.” Industrial and Labor Relations Review 30(2):175–87.
Altonji, J. G., and R. M. Blank. 1999. “Race and Gender in the Labor Market.” In Handbook of Labor Economics, edited by O. Ashenfelter and D. Card, Vol. 3, 3243–59. Elsevier.
Anderson, L. R., R. G. Fryer Jr., and C. A. Holt. 2006. “Discrimination: Evidence from Psychology and Economics Experiments.” In Handbook on Economics of Racial Discrimination, edited by W. Rogers, 97–115. Northampton: Edward Elgar Publishing.
Arrow, K. J. 1973. “The Theory of Discrimination.” In Discrimination in Labor Markets, edited by O. Ashenfelter and A. Rees, 3–33. Princeton, NJ: Princeton University Press.
Ayres, I., and P. Siegelman. 1995. “Race and Gender Discrimination in Bargaining for a New Car.” American Economic Review 85(3):304–21.
Baert, S., B. Cockx, N. Gheyle, and C. Vandamme. 2013. “Do Employers Discriminate Less If Vacancies Are Difficult to Fill? Evidence from a Field Experiment.” IZA Discussion Paper No. 7145.
Ball, S. B., C. Eckel, P. J. Grossman, and W. Zame. 2001. “Status in Markets.” Quarterly Journal of Economics 116(1):161–88.
Becker, G. S. 1971. The Economics of Discrimination, 2nd ed. Chicago, IL: University of Chicago Press.
Bertrand, M., and S. Mullainathan. 2004. “Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination.” American Economic Review 94(4):991–1013.
Bertrand, M., D. Chugh, and S. Mullainathan. 2005. “Implicit Discrimination.” American Economic Review: Papers and Proceedings 95(2):94–8.
Booth, A. L., A. Leigh, and E. Varganova. 2012. “Does Ethnic Discrimination Vary Across Minority Groups? Evidence from a Field Experiment.” Oxford Bulletin of Economics and Statistics 74(4):547–73.
Bosch, M. M., A. Carnero, and L. Farre. 2010. “Information and Discrimination in the Rental Housing Market: Evidence from a Field Experiment.” Regional Science and Urban Economics 40(1):11–19.
Carlsson, M., and D.-O. Rooth. 2007. “Evidence of Ethnic Discrimination in the Swedish Labor Market Using Experimental Data.” Labour Economics 14(4):716–29.
Charles, K. K., and J. Guryan. 2008. “Prejudice and Wages: An Empirical Assessment of Becker’s The Economics of Discrimination.” Journal of Political Economy 116(5):773–809.
Charles, K. K., and J. Guryan. 2011. “Studying Discrimination: Fundamental Challenges and Recent Progress.” NBER Working Paper No. 17156.
Combes, P.-P., B. Decreuse, M. Laouénan, and A. Trannoy. 2013. “Customer Discrimination and Employment Outcomes: Theory and Evidence from the French Labor Market.” Journal of Labor Economics 34(1).
Cornell, B., and I. Welch. 1996. “Culture, Information, and Screening Discrimination.” Journal of Political Economy 104(3):542–71.
Doleac, J. L., and L. C. D. Stein. 2013. “The Visible Hand: Race and Online Market Outcomes.” Economic Journal 123(572):F469–92.
Eriksson, S., and D.-O. Rooth. 2014. “Do Employers Use Unemployment as a Sorting Criterion When Hiring? Evidence from a Field Experiment.” American Economic Review 104(3):1014–39.
Ewens, M., B. Tomlin, and L. C. Wang. 2014. “Statistical Discrimination or Prejudice? A Large Sample Field Experiment.” Review of Economics and Statistics 96(1):119–34.
Fershtman, C., and U. Gneezy. 2001. “Discrimination in a Segmented Society: An Experimental Approach.” Quarterly Journal of Economics 116(1):351–77.
Fryer, R. G., D. Pager, and J. L. Spenkuch. 2011. “Racial Disparities in Job Finding and Offered Wages.” Journal of Law and Economics 56(3):633–89.
Glaeser, E., D. I. Laibson, J. A. Scheinkman, and C. L. Soutter. 2000. “Measuring Trust.” Quarterly Journal of Economics 115(3):811–46.
Heckman, J. J. 1998. “Detecting Discrimination.” Journal of Economic Perspectives 12(2):101–16.
Heckman, J. J., and P. Siegelman. 1993. “The Urban Institute Audit Studies: Their Methods and Findings.” In Clear and Convincing Evidence: Measurement of Discrimination in America, edited by M. Fix and R. Struyk, 187–258. Washington, DC: Urban Institute.
Holzer, H. J., and K. R. Ihlanfeldt. 1998. “Customer Discrimination and Employment Outcomes for Minority Workers.” Quarterly Journal of Economics 113(3):835–67.
Hoynes, H., D. L. Miller, and J. Schaller. 2012. “Who Suffers During Recessions?” Journal of Economic Perspectives 26(3):27–47.
Kroft, K., F. Lange, and M. J. Notowidigdo. 2013. “Duration Dependence and Labor Market Conditions: Theory and Evidence from a Field Experiment.” Quarterly Journal of Economics 128(3):1123–67.
Lahey, J. 2008. “Age, Women and Hiring: An Experimental Study.” Journal of Human Resources 43(1):30–56.
Lahey, J., and R. A. Beasley. 2009. “Computerizing Audit Studies.” Journal of Economic Behavior and Organization 70(3):508–14.
Laouénan, M. 2013. “‘Hate at First Sight’: Evidence of Consumer Discrimination Against African Americans in the U.S.” Institut de Recherches Economiques et Sociales de l’Université catholique de Louvain Discussion Paper 2013-32.
Laouénan, M. 2014. “‘Can’t Get Enough’: Prejudice, Contact Jobs and the Racial Wage Gap in the U.S.” IZA Discussion Paper No. 8006.
Levitt, S. D., and J. A. List. 2007. “What Do Laboratory Experiments Measuring Social Preferences Reveal About the Real World?” Journal of Economic Perspectives 21(2):153–74.
List, J. A. 2004. “The Nature and Extent of Discrimination in the Marketplace: Evidence from the Field.” Quarterly Journal of Economics 119(1):49–89.
Lundberg, S. J., and R. Startz. 1983. “Private Discrimination and Social Intervention in Competitive Labor Markets.” American Economic Review 73(3):340–7.
Neumark, D. 1996. “Sex Discrimination in Hiring in the Restaurant Industry: An Audit Study.” Quarterly Journal of Economics 111(3):915–41.
Neumark, D. 2012. “Detecting Discrimination in Audit and Correspondence Studies.” Journal of Human Resources 47(4):1128–57.
Nunley, J. M., M. F. Owens, and R. S. Howard. 2011. “The Effects of Information and Competition on Racial Discrimination: Evidence from a Field Experiment.” Journal of Economic Behavior and Organization 80(3):670–9.
Nunley, J. M., A. Pugh, N. Romero, and R. A. Seals. 2014a. “Unemployment, Underemployment, and Employment Opportunities: Results from a Correspondence Audit of the Labor Market for College Graduates.” Auburn University Department of Economics Working Paper No. 2014-04.
Nunley, J. M., A. Pugh, N. Romero, and R. A. Seals. 2014b. “College Major, Internship Experience, and Employment Opportunities: Estimates from a Résumé Audit.” Auburn University Department of Economics Working Paper No. 2014-03.
Oreopoulos, P. 2011. “Why Do Skilled Immigrants Struggle in the Labor Market? A Field Experiment with Thirteen Thousand Résumés.” American Economic Journal: Economic Policy 3(4):148–71.
Phelps, E. S. 1972. “The Statistical Theory of Racism and Sexism.” American Economic Review 62(4):659–61.
Price, J., and J. Wolfers. 2010. “Racial Discrimination among NBA Referees.” Quarterly Journal of Economics 125(4):1859–87.
Rampell, C. 2013. “With Positions to Fill, Employers Wait for Perfection.” New York Times, March 6. http://www.nytimes.com/2013/03/07/business/economy/despite-job-vacancies-employers-shy-away-from-hiring.html?pagewanted=all&_r=0.
Riach, P. A., and J. Rich. 2002. “Field Experiments of Discrimination in the Market Place.” Economic Journal 112(483):F480–F518.
Rooth, D.-O. 2010. “Automatic Associations and Discrimination in Hiring: Real World Evidence.” Labour Economics 17(3):523–34.
Spreen, T. L. 2013. “Recent College Graduates in the U.S. Labor Force: Data from the Current Population Survey.” Monthly Labor Review, February: 3–13.
van Ravenzwaaij, D., H. L. J. van der Maas, and E.-J. Wagenmakers. 2011. “Does the Name-Race Implicit Association Test Measure Racial Prejudice?” Experimental Psychology 58(4):271–7.
Yinger, J. 1986. “Measuring Racial Discrimination with Fair Housing Audits: Caught in the Act.” American Economic Review 76(5):881–93.
Supplemental Material
The online version of this article (DOI: 10.1515/bejeap-2014-0082) offers supplementary material, available to authorized users.