Published by De Gruyter, June 9, 2017

Constitutional Judicial Behavior: Exploring the Determinants of the Decisions of the French Constitutional Council

Romain Espinosa
From the journal Review of Law & Economics

Abstract

This article empirically assesses the relevance of three theories of judicial decision-making for the French Constitutional Council. Our empirical analysis follows previous works by integrating more recent observations, and proposes a new methodology by exploiting new data for post-1995 cases. After analyzing the 612 cases published between 1974 and 2013, we focus on post-1995 cases, for which we know the exact composition of the court. Our results suggest that (1) political/ideological voting occurs, (2) Justices restrain themselves from invalidating laws, and (3) a court’s independence suffers from political power concentration in other institutions. All in all, these results suggest the need for a reform of the Constitutional Council to strengthen its independence.

JEL Classification: D71; D72; K40

Appendix

A Time components

Decisions of the CC can be seen as a time series: we observe the decisions of a single actor over the years, so the usual challenges of time-series analysis may apply here as well. A fundamental limitation, however, is that the decisions of the CC are not issued at regular intervals (e.g. monthly or yearly), and that several decisions are sometimes published on the same day. To disentangle potential trend and cyclical components from the real effects of the other explanatory variables, we first look at the evolution of the censorship rate over time.

Figure 1 displays the yearly average censorship rate since 1974. To distinguish between cyclical and long-run trend components, we apply a Hodrick-Prescott filter. The trend component is presented in Figure 2: the level of censorship has globally increased over the years. As for the cyclical component, Figure 3 displays the average censorship rate per year of legislature; no real pattern can be observed in the data.

Figure 1: Average censorship rate per year since 1974.

Figure 2: Trend component of the censorship rate per year since 1974.

Figure 3: Average censorship rate per year of legislature.

We also investigate whether there exist monthly trends and/or monthly cycles. To do so, we present the average monthly censorship rate in Figure 4. No clear pattern appears in the data. The relatively high censorship rate in December is due to the budget bills, which always face a high risk of censorship.

Figure 4: Average censorship rate per month.

Altogether, the only time component we are able to detect in the data is the global increase in the censorship rate over the years studied.

B Robustness checks, autocorrelation, and regression diagnosis

B.1 Robustness checks

To check whether our results are driven by specific coding procedures or specifications, we explored alternative solutions. First, regarding the econometric specification, we also ran probit and linear probability models; the results were qualitatively identical.

Second, we considered an alternative coding for Justice Bazy-Malaurie. While Justice Bazy-Malaurie was originally appointed by Bernard Accoyer (right-wing), she was reappointed by Claude Bartolone (left-wing) at the beginning of 2013. We therefore computed two versions of our indicators: one in which Justice Bazy-Malaurie remains a right-wing-appointed Justice and one in which she becomes a left-wing-appointed Justice after her reappointment. The results were nearly identical, the correlation coefficient between the two versions being close to 1.

B.2 Autocorrelation

A potential bias could emerge in our estimations if the error terms were correlated over time. Even though our dataset cannot be treated as a true time series (some days have more than one decision), its structure is nevertheless very close to one. This special structure allowed us to create variables close to lagged values of the dependent variable: previous is the average censorship rate of the last decision day.

To deal with this potential issue, we now consider the data as a time series in which the decisions’ publication order corresponds to the time component, and we replace the variable previous by the lagged value of the dependent variable (L.censorship). Estimating this new model yields a coefficient on L.censorship equal to 0.4465. This coefficient is statistically different from zero at the 5% level, and not statistically different from what we found for previous: the coefficients on the two variables have the same sign, the same significance level, and the same magnitude.

Second, to detect potential serial correlation in our error terms, we compute the residuals of this last model (without clustering). The correlation coefficient between the error term and its lagged value is equal to 0.008, which suggests no serial correlation. Moreover, Figure 6 plots the error terms against their lagged values; the graph reveals no autocorrelation. [23]

Third, in order to run the Breusch-Godfrey test, we estimate the previous model in linear form (OLS regression) without clusters. [24] The p-value of the Breusch-Godfrey test is equal to 0.7129; the test therefore fails to reject the null hypothesis of no serial correlation, which confirms the results presented in the article.

B.3 Logit diagnosis

Apart from collinearity, which was addressed in the discussion above, we now present several tests to check the validity of our estimations. As a baseline, we focus on the specification of column 4 of Table 6 (1995–2013 period with a linear trend). To check the robustness of our specification, we run the tests recommended by Peng et al. (2002).

First, we run a link test, which checks whether our model is well specified. It regresses the censorship decisions on the predicted values and the square of the predicted values. If the predicted values turn out to be significant, the model is not entirely misspecified; if the squared predicted values are highly significant, the model misses some important independent variables. Running the link test on the logistic regression of column 4 yields a significant coefficient for the predicted values (at the 1% level) and a non-significant coefficient for the squared predicted values (at the 10% level). This result supports the quality of our specification.

Second, we run Hosmer and Lemeshow’s goodness-of-fit test, which measures the match between the predicted probabilities and the binary outcomes. Failing to reject the null hypothesis supports the empirical model. In our case, the p-value is equal to 0.1829, which validates our estimation.
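A minimal version of the Hosmer-Lemeshow statistic can be hand-rolled as below, assuming the usual ten groups formed on deciles of the predicted probabilities; the probabilities and outcomes are simulated.

```python
# Hand-rolled Hosmer-Lemeshow goodness-of-fit test: compare observed and
# expected event counts within probability deciles via a chi-squared statistic.
import numpy as np
from scipy import stats

def hosmer_lemeshow(y, p, groups=10):
    order = np.argsort(p)
    y, p = y[order], p[order]
    bins = np.array_split(np.arange(len(y)), groups)
    chi2 = 0.0
    for idx in bins:
        obs, exp, m = y[idx].sum(), p[idx].sum(), len(idx)
        # Denominator is m * pbar * (1 - pbar), written via exp = m * pbar.
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / m) + 1e-12)
    # Conventional degrees of freedom: groups - 2.
    return chi2, stats.chi2.sf(chi2, groups - 2)

rng = np.random.default_rng(3)
p = rng.uniform(0.1, 0.9, 400)
y = (rng.uniform(size=400) < p).astype(float)  # outcomes drawn from p
chi2, pval = hosmer_lemeshow(y, p)
print(chi2, pval)
```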

Moreover, we evaluate the predictive power of our logit estimation using an ROC curve analysis. The underlying idea of this instrument is to measure the numbers of false positives and false negatives that the estimation produces. The power of the regression to discriminate between censorship and validation decisions is given by the area under the curve in Figure 7. The area under the curve is close to 0.8, which indicates good discriminatory power.
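The area under the ROC curve has a useful pairwise interpretation, which the sketch below computes directly on simulated scores: it is the probability that a randomly chosen censorship decision receives a higher predicted probability than a randomly chosen validation decision.

```python
# AUC via the pairwise (Mann-Whitney) definition, on simulated scores.
import numpy as np

def auc(y, p):
    # Fraction of (positive, negative) pairs where the positive case gets
    # the higher score; ties count one half.
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(4)
y = (rng.uniform(size=500) < 0.5).astype(int)
# Scores informative about y, so the AUC lies well above 0.5.
p = np.clip(0.5 * y + rng.normal(0.25, 0.2, 500), 0, 1)
print(auc(y, p))
```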

Last but not least, we look at the possibility that a few outliers drive our results. To do so, we plot three statistics associated with each observation: the Pearson residuals (Figure 8), the deviance residuals (Figure 9), and the Pregibon leverage (Figure 10).

Figure 5: Composition of challenged laws.

Figure 6: Plot of residuals and lag residuals.

Figure 7: ROC curve analysis.

Figure 8: Standardized Pearson residuals.

Figure 9: Deviance residuals.

Figure 10: Leverage.

As one can see from Figure 10, three observations stand out (352, 491, and 492). To verify that these outliers do not drive our results, we re-run our baseline model without them. The results were unaffected by dropping these three observations, which supports the previous findings.

C Case selection by justices

The issue of case selection has been a major concern in the Law and Economics literature, since the cases that ultimately reach courts may not be a representative sample of all the conflicts that emerge. Some works have also addressed the issue of case selection by judges themselves, when cases are not randomly assigned within a court (Shayo and Zussman, 2011).

In our setting, the question of case selection by Justices arises because Justices do not attend every single case. The variation in the attendance rates of the CC’s Justices is useful to estimate the intensity of political/ideological voting. However, if Justices selected which cases to attend and missed the cases they wished to avoid, our estimates related to shareRW would be biased downward.

To investigate case selection by Justices, Table 9 presents Justices’ attendance rates at the CC’s decisions. [25] The first column displays the average attendance rate for each Justice. The second and third columns show attendance rates for conformity and censorship decisions, respectively. The last column presents the p-values of the two-group mean-comparison tests (conformity vs. censorship decisions).

Table 9: Attendance rates per Justice (1995–2013).

Justice | (1) All decisions | (2) Conformity | (3) Censorship | (4) P-value
Abadie | 0.962 | 0.925 | 1 | 0.087
Ameller | 1 | 1 | 1 | .
Barrot | 0.986 | 1 | 0.971 | 0.321
Bazy | 0.984 | 0.967 | 1 | 0.306
Belloubet | 1 | 1 | 1 | .
Cabannes | 1 | 1 | 1 | .
Canivet | 0.934 | 0.929 | 0.938 | 0.829
Charasse | 0.957 | 1 | 0.914 | 0.083
Chirac | 0.508 | 0.519 | 0.5 | 0.887
Colliard | 0.981 | 0.972 | 0.988 | 0.474
Dailly | 1 | 1 | 1 | .
Denoix de Saint Marc | 0.992 | 0.982 | 1 | 0.283
Debre | 0.992 | 0.982 | 1 | 0.283
Dumas | 0.933 | 0.92 | 0.95 | 0.697
Dutheillet | 0.987 | 0.972 | 1 | 0.121
Faure | 0.935 | 0.95 | 0.909 | 0.67
Giscard d’Estaing | 0.508 | 0.439 | 0.568 | 0.087
Guena | 0.99 | 0.98 | 1 | 0.292
Guillenchmidt | 0.947 | 0.949 | 0.946 | 0.943
Haenel | 0.957 | 0.943 | 0.971 | 0.562
Joxe | 0.904 | 0.861 | 0.941 | 0.09
Lancelot | 0.97 | 0.97 | 0.971 | 0.983
Lenoir | 0.987 | 0.975 | 1 | 0.333
Maestracci | 1 | 1 | 1 | .
Mazeaud | 0.981 | 0.972 | 0.988 | 0.474
Pelletier | 0.985 | 0.967 | 1 | 0.277
Pezant | 0.956 | 0.941 | 0.968 | 0.487
Robert | 1 | 1 | 1 | .
Rudloff | 1 | 1 | 1 | .
Schnapper | 0.956 | 0.972 | 0.942 | 0.359
Steinmetz | 0.988 | 0.987 | 0.989 | 0.894
Veil | 0.981 | 0.958 | 1 | 0.059

Note: To avoid low attendance rates due to illness leading to resignation, we assume mandates begin on the first day that Justices sit and end on the last day they sit.
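The two-group mean-comparison test behind column (4) of Table 9 is a standard two-sample t-test on the attendance indicator, split by decision outcome; the attendance data below are made up for illustration.

```python
# Two-sample mean-comparison test for one Justice's attendance,
# split by decision outcome, on simulated attendance indicators.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# 1 = Justice attended, 0 = missed; split by decision outcome.
attend_conformity = (rng.uniform(size=120) < 0.95).astype(float)
attend_censorship = (rng.uniform(size=110) < 0.97).astype(float)

t_stat, p_value = stats.ttest_ind(attend_conformity, attend_censorship)
# A p-value above 0.05 gives no evidence that attendance differs by outcome.
print(round(p_value, 3))
```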

First of all, we note that, except for the two former Presidents of the Republic, attendance rates are above 90% for all Justices. Among the 30 regular Justices with rates above 90%, 25 attended more than 95% of the cases. This suggests that, if case selection occurs, it is very limited in frequency.

Looking at the fourth column, one can notice that no Justice’s p-value falls below the 5% threshold, which would have suggested case selection. Justices do not appear to attend censorship decisions more regularly than conformity decisions. A potential concern could arise for Justice Veil, who is very close to the threshold: she attended all censorship decisions but only 95.8% of the conformity decisions.

To further investigate potential case selection, Table 10 displays censorship rates for the cases Justices attended. The first three columns show censorship probabilities for decisions attended by each Justice, per category of law: (1) all laws, (2) laws voted by right-wing coalitions, and (3) laws voted by left-wing coalitions. The last column presents the p-values of the two-group mean-comparison tests for the censorship rate of attended cases across the two subsamples (laws voted by right-wing and by left-wing majorities).

Table 10: Censorship rates for decisions attended by Justices.

Justice | (1) All laws | (2) Right laws | (3) Left laws | (4) P-value
Abadie | 0.507 | 0.391 | 0.558 | 0.189
Ameller | 0.5 | 0.407 | 0.582 | 0.05
Barrot | 0.493 | 0.532 | 0.409 | 0.349
Bazy | 0.525 | 0.59 | 0.409 | 0.181
Belloubet | 0.333 | . | 0.333 | .
Cabannes | 0.355 | 0.36 | 0.333 | 0.906
Canivet | 0.54 | 0.564 | 0.421 | 0.259
Charasse | 0.485 | 0.533 | 0.381 | 0.255
Chirac | 0.563 | 0.563 | . | .
Colliard | 0.543 | 0.5 | 0.607 | 0.2
Dailly | 0.385 | 0.385 | . | .
Debre | 0.542 | 0.571 | 0.409 | 0.17
Denoix de Saint Marc | 0.542 | 0.571 | 0.409 | 0.17
Dumas | 0.452 | 0.375 | 0.556 | 0.255
Dutheillet | 0.551 | 0.535 | 0.714 | 0.201
Faure | 0.345 | 0.333 | 0.4 | 0.785
Giscard d’Estaing | 0.6 | 0.645 | 0.357 | 0.044
Guena | 0.534 | 0.444 | 0.582 | 0.185
Guillenchmidt | 0.543 | 0.547 | 0.5 | 0.757
Haenel | 0.507 | 0.556 | 0.409 | 0.267
Joxe | 0.563 | 0.547 | 0.714 | 0.233
Lancelot | 0.508 | 0.357 | 0.549 | 0.209
Lenoir | 0.494 | 0.36 | 0.558 | 0.107
Maestracci | 0.333 | . | 0.333 | .
Mazeaud | 0.543 | 0.5 | 0.61 | 0.187
Pelletier | 0.554 | 0.441 | 0.677 | 0.057
Pezant | 0.56 | 0.56 | . | .
Robert | 0.355 | 0.36 | 0.333 | 0.906
Rudloff | 0.25 | 0.25 | . | .
Schnapper | 0.536 | 0.518 | 0.714 | 0.163
Steinmetz | 0.548 | 0.555 | 0.462 | 0.519
Veil | 0.55 | 0.505 | 0.617 | 0.181

Note: Missing values are due to the fact that some Justices served only under left-wing or right-wing legislatures.

As one can note, no regular Justice’s p-value in the last column falls below the 5% threshold. The former President of the Republic Giscard d’Estaing has a p-value below the threshold: the decisions he attended were censorship decisions significantly more often under right-wing legislatures than under left-wing ones.

Two regular Justices are very close to the 5% threshold, namely Justices Ameller and Pelletier. Concerning Justice Ameller, Table 9 shows that he attended all cases, so this difference in censorship rates cannot be due to case selection. Justice Pelletier attended all cases but one, which likewise shows that she did not select the cases she attended.

Given the data available to date, it seems that appointed Justices do not select the cases they attend.

Acknowledgements

I am indebted to Bruno Deffains and George Bresson for multiple readings, and their thorough and insightful remarks. I am also very grateful to Dominique Schnapper, Lee Epstein, Nuno Garoupa, Jeffrey Rachlinski, Jerg Gutmann, Sofia Amaral-Garcia, Pierre Bentata and two anonymous referees for their detailed and very helpful remarks on previous versions of this paper.

References

Amaral-Garcia, S., N. Garoupa, and V. Grembi. 2009. “Judicial Independence and Party Politics in the Kelsenian Constitutional Courts: The Case of Portugal,” 6(2) Journal of Empirical Legal Studies 381–404. doi:10.1111/j.1740-1461.2009.01147.x

Balli, H.O., and B.E. Sørensen. 2012. “Interaction Effects in Econometrics,” 45(1) Empirical Economics 583–603. doi:10.1007/s00181-012-0604-2

Cameron, C.M., and L. Kornhauser. 2010. “Modeling Collegial Courts: Adjudication Equilibria,” Working Paper. doi:10.2139/ssrn.1400838

Carroll, R., and L. Tiede. 2011. “Judicial Behavior on the Chilean Constitutional Tribunal,” 8(4) Journal of Empirical Legal Studies 856–877. doi:10.1111/j.1740-1461.2011.01243.x

Epstein, L., and W.M. Landes. 2012. “Was There Ever Such a Thing as Judicial Self-Restraint?,” 100 California Law Review 557–578.

Epstein, L., W.M. Landes, and R.A. Posner. 2011. “Why (And When) Judges Dissent,” 3 Journal of Legal Analysis 101–137. doi:10.1093/jla/3.1.101

Epstein, L., and A.D. Martin. 2012. “Is the Roberts Court Especially Activist? A Study of Invalidating (And Upholding) Federal, State, and Local Laws,” 61 Emory Law Journal 737–758.

Epstein, L., A.D. Martin, K.M. Quinn, and J.A. Segal. 2007. “Ideological Drift among Supreme Court Justices: Who, When, and How Important?,” 101(4) Northwestern University Law Review 1483–1541.

Feld, L.P., and S. Voigt. 2003. “Economic Growth and Judicial Independence: Cross-Country Evidence Using a New Set of Indicators,” 19 European Journal of Political Economy 497–527. doi:10.1016/S0176-2680(03)00017-X

Franck, R. 2009. “Judicial Independence under a Divided Polity: A Study of the Rulings of the French Constitutional Court, 1959–2006,” 25(1) Journal of Law, Economics and Organization 262–284. doi:10.1093/jleo/ewn001

Franck, R. 2010. “Judicial Independence and the Validity of Controverted Elections,” 12(2) American Law and Economics Review 423–461. doi:10.1093/aler/ahq011

Garoupa, N., F. Gomez-Pomar, and V. Grembi. 2011. “Judging under Political Pressure: An Empirical Analysis of Constitutional Review Voting in the Spanish Constitutional Court,” 29(3) Journal of Law, Economics and Organization 513–534. doi:10.1093/jleo/ewr008

Garoupa, N., and V. Grembi. 2013. “Judicial Review and Political Bias: Moving from Consensual to Majoritarian Voting,” Working Paper.

Hayo, B., and S. Voigt. 2007. “Explaining de Facto Judicial Independence,” 27 International Review of Law and Economics 269–290. doi:10.1016/j.irle.2007.07.004

Hoennige, C. 2009. “The Electoral Connection: How the Pivotal Judge Affects Oppositional Success at European Constitutional Courts,” 32(5) West European Politics 963–984. doi:10.1080/01402380903064937

Holcombe, R.G., and C.S. Rodet. 2012. “Rule of Law and the Size of Government,” 8 Journal of Institutional Economics 49–69. doi:10.1017/S1744137411000348

La Porta, R., F. Lopez-de-Silanes, A. Shleifer, and R. Vishny. 1999. “The Quality of Government,” 15 Journal of Law, Economics and Organization 222–279. doi:10.1093/jleo/15.1.222

Lijphart, A. 1999. Patterns of Democracy. New Haven: Yale University Press.

Martin, A.D., K.M. Quinn, and L. Epstein. 2005. “The Median Justice on the United States Supreme Court,” 83(5) North Carolina Law Review 1275–1320.

Melton, J., and T. Ginsburg. 2014. “Does De Jure Judicial Independence Really Matter? A Reevaluation of Explanations for Judicial Independence,” 2(2) Journal of Law and Courts 187–217. doi:10.2139/ssrn.2104512

Miles, T.J., and C.R. Sunstein. 2008. “The Real World of Arbitrariness Review,” 75 The University of Chicago Law Review 761–814. doi:10.2139/ssrn.1089076

Pellegrina, L.D., and N. Garoupa. 2013. “Choosing between the Government and the Regions: An Empirical Analysis of the Italian Constitutional Court Decisions,” 52(4) European Journal of Political Research 558–580. doi:10.1111/1475-6765.12003

Peng, C.-Y., K.L. Lee, and G.M. Ingersoll. 2002. “An Introduction to Logistic Regression Analysis and Reporting,” 96(1) The Journal of Educational Research 3–14. doi:10.1080/00220670209598786

Ramseyer, J.M., and E.B. Rasmusen. 1997. “Judicial Independence in a Civil Law Regime: The Evidence from Japan,” 13(2) Journal of Law, Economics and Organization 259–286. doi:10.1093/oxfordjournals.jleo.a023384

Schnapper, D. 2010. Une Sociologue au Conseil Constitutionnel. Paris: Editions Gallimard.

Shayo, M., and A. Zussman. 2011. “Judicial Ingroup Bias in the Shadow of Terrorism,” 126 Quarterly Journal of Economics 1447–1484. doi:10.1093/qje/qjr022

Spiller, P.T., and R. Gely. 1992. “Congressional Control or Judicial Independence: The Determinants of U.S. Supreme Court Labor-Relations Decisions, 1949–1988,” 23(4) RAND Journal of Economics 463–492. doi:10.2307/2555900

Published Online: 2017-6-9

© 2017 Walter de Gruyter GmbH, Berlin/Boston
