Open Access (CC BY 4.0 license). Published by De Gruyter, March 20, 2020

Put Dialectics into the Machine: Protection against Automatic-decision-making through a Deeper Understanding of Contestability by Design

Claudio Sarra
From the journal Global Jurist

Abstract

This paper endorses the idea that the right to contest provided for by art. 22, § 3 GDPR is the apex of a progressive set of tools the data subject has at his disposal to cope with automated decisions, and that it should work as an architectural principle for creating contestable systems. In order to play that important role, however, it cannot be reduced to the right to human intervention, also provided for by art. 22, § 3, nor to a generic opposition to the outcome of the automated processing. Thus, drawing on a thorough analysis of the relationships among the rights included in art. 22, § 3 GDPR, as well as on the proper juridical meaning of “contestatio”, it is concluded that the right to contest has its own proper nature as a hybrid substantive-procedural right, one able to give concrete shape to all the other rights indicated in art. 22, § 3, including the much discussed right to explanation.

1 Introduction: Contestability by design

Automated decision-making is nowadays considered one of the major challenges to the fundamental rights of European citizens, for many different reasons (Kitchin 2017; Marwick 2012; Pasquale 2015; Zarsky 2016). The opacity of highly complex data processing ending in decisions that may have relevant social as well as legal consequences is, of course, the first main reason for concern, since it makes decisions appear to the layperson devoid of any comprehensible rationale (Burrell 2016; Guidotti, Monreale, and Ruggieri 2018; Harkens 2018).

Secondly, opacity can also hide bias and unfair discriminatory operations that may afflict decisional algorithms, especially when they are applied at large scale (Sweeney 2013; Dobbe et al. 2018). Thirdly, since the decision taken can result from the classification of the specific case at stake into previously formed generalizations, the process can be flawed by an inductive mismatch: what can be said at the population level is not necessarily true at the individual level, where new cases manifest themselves (Mittelstadt et al. 2016, 5–6).

A more general concern can be raised about the social desirability of letting significant sectors of social life be guided completely by machines, even at the cost of letting them shape juridical relationships, something that raises issues about how to think about responsibility for rights violations (Wagner 2019).

European law has adopted a restrictive approach to decisions based solely on automated processing since the days of art. 15 of the old Directive 95/46/EC (Bygrave 2001); a similar provision is now included in art. 22 of the General Data Protection Regulation (GDPR), which is receiving increasing attention because of the importance of automated decision-making in the so-called “data driven society” (Boyd and Crawford 2012; Bygrave 2019; Faini 2019; Lycett 2013; Mayer-Schönberger and Cukier 2013; Sarra 2017, 2019; Zech 2017).

Art. 22 states that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”, but it also provides for significant exceptions, in which cases extra safeguard measures must be adopted to protect the data subject’s rights.

Among these, § 3 states that the data controller shall implement measures to ensure a “right to contest the decision”. Recently this provision has attracted particular interest among scholars, leading some to speak of the existence in the GDPR of a contestability by design principle that goes along with the more generally acknowledged privacy by design (Almada 2019; Mulligan, Kluttz, and Kohli 2020).

According to Marco Almada, contestability by design indicates the mandatory need to build decision-making systems in such a way as to include the possibility of contesting the outcome from their early design onwards (Almada 2019).

But this topic needs to be deepened through a thorough analysis of the relations among the extra rights provided for by art. 22, § 3, as well as of the proper juridical meaning of the concept of contestatio, in order to avoid the risk of seeing the right to contest collapse into different figures (typically, into the right to human intervention), on one side, or of reducing it to a mere faculty to “lament” about the outcome without any serious consequence for the data controller, on the other.

It must be taken into consideration that the right to contest is given to the data subject exactly in those exceptional cases in which recourse to an automated decision is legitimate; thus the data subject is not supposed to complain simply about the automation per se. On the other hand, since art. 22, § 3 also provides for other safeguards, such as the right to human intervention and the right to express one’s own opinion, the explicit provision of a “right to contest” should be interpreted in a way that gives it a specific content, without making it disappear behind the other two. Thus, it must differ from both the mere request for human intervention and the manifestation of an opinion, however critical.

The problem arises because the data subject is supposed to contest the outcome of automated processing; in other words, the timing for contestation is essentially the end of that processing. Now, if this is the case, and everything was done in perfect compliance with the GDPR, then the processing will have been started on a legitimate basis and the information duties already fulfilled, including those about the existence of automated processing, with meaningful information about the logic involved as well as the significance and the envisaged consequences for the data subject (artt. 13, § 2, f), 14, § 2, g) GDPR). Moreover, the data subject could already have made use of his other rights during the processing (access, rectification, erasure, restriction and so on). What else could he complain about at the end of the processing if he wants to use his right to contest?

So, as these preliminary considerations show, in interpreting art. 22, § 3 we face the risk of reducing the “right to contest” to a sort of useless “paper right”, either because it is “absorbed” into other provisions (human intervention, opinion, the rights to be informed, to access and so on), or simply because it turns out to be devoid of any specific meaning.

On the contrary, this paper supports the thesis that: a) the “right to contest” is the apex of a progressive set of legal safeguards, one wider than what is explicitly indicated in the litera legis of art. 22, § 3; b) it cannot be absorbed by the other “rights” but, on the contrary, if properly and fully exercised, it can include them; c) it imposes specific compliance duties on the data controller that are far more demanding than may appear at first sight.

In order to address and clarify these points, I will first recall the state of the art on the interpretation of art. 22 GDPR (§ 2). I will then explain why an improper interpretation of the relationships among the rights indicated in art. 22, § 3 can lead to paradoxes, which can only be avoided by giving a strong juridical interpretation of the “right to contest” in line with the historical and conceptual meaning of contestatio (§ 3). Next, I will show why the right to contest marks the climax of the safeguards against the risks connected with automated decision-making, and why this gives new arguments in favor of the existence of a right to explanation, also suggesting a way to interpret its content (§ 4). Finally, I will show how the interpretation endorsed here can help the specific understanding of the contestability by design principle (§§ 5–6).

2 Automatic decision-making processes in the context of the General Data Protection Regulation: Art. 22 GDPR

As already mentioned, the General Data Protection Regulation (GDPR) deals with decisions based exclusively on automated processing in art. 22, endorsing a restrictive approach in principle. Generally speaking, automated decision-making that has legal consequences for the data subject, or similarly affects him or her, is forbidden when it does not include any significant human intervention (art. 22, § 1).

On this point the Art29 Working Party (Art29WP 2018) gave its first interpretative suggestions in the Guidelines on Automated individual decision-making and Profiling for the Purposes of Regulation 2016/679, as revised and adopted in February 2018. According to this document, a decision is to be considered completely automated even in those cases in which a human being intervenes but lacks any significant authority to critically evaluate and change the decision taken by the machine (Art29WP 2018, 21; Wagner 2019). Thus, the prohibition set forth in art. 22, § 1 should apply in those cases in which human intervention is lacking or the human acts as a mere nuncius of the machine’s “will”.

However, the interpretation given by the Art29WP presupposes that the only relevant modality of human intervention is one that comes at the end of the processing, with the aim of simply confirming or changing the decision. Following this idea, a highly significant human intervention at a remote stage of the processing would not be considered relevant, and the decision taken would still be considered based solely on automated processing. This seems a debatable position, both on a strict literal interpretation (the case cited is hardly a decision based solely on automation) and on a wider, functional one. As a matter of fact, considering relevant only an intervention at the end of highly complex mechanical procedures may place an excessive burden of responsibility on the human involved, especially for those systems acknowledged to outperform humans, that is, when machines are statistically more efficient than humans at certain tasks (Veale and Edwards 2018, 400). In those cases, the human being involved would face a dilemma: either confirm the decision taken by the machine, or change it and be prepared to give his organization specific explanations of why the machine – usually so efficient – should be considered unreliable in the specific case. Since this could be a highly tricky task, we may expect him always to confirm the machine’s decision, thus becoming de facto a mere nuncius.

Here we can see the paradox of the Art29WP’s interpretation: on one hand, a highly significant intervention in prior stages of the processing would not count as human intervention even if it could lead to a very different determination; on the other, a final intervention, even if made by an authoritative person, could easily resolve into plain confirmation of the machine’s decision, especially in those fields demanding higher responsibility.

The interpretation of art. 22 GDPR is currently under debate (Brkan 2019; Bygrave 2019; De Hert and Papakonstantinou 2016; Kaltheuner and Bietti 2017; Mendoza and Bygrave 2017; Veale and Edwards 2018), the most relevant points of which are the following.

Firstly, scholars are discussing whether the article provides for a right in favor of the data subject or for a prohibition addressed to the data controller. The litera legis actually speaks of a “right not to be subject to a decision based solely on automatic processing”, but those who agree with the “prohibition” thesis (including myself) argue that it is more in line with the general ratio of the GDPR, which increases protection for data subjects by focusing on the accountability of the data controller, who must provide (and be able to demonstrate that he has provided) all the safeguards needed to protect the rights of the data subject. Thus, if art. 22 is interpreted as setting a general prohibition, the data controller is not allowed to take advantage of the inertia of the data subject.

As a matter of fact, if the efficacy of the restrictive principle stated by art. 22 depended on the initiative of the data subject then, given the velocity, complexity and scale of automated processes, there would be a serious risk of disempowering that very principle.

Another interesting point relates to the fact that art. 22 GDPR deals with individual automated decisions: that is, those processes ending in a stance towards a singular situation, even if deployed by means of large quantities of data not deriving only from the data subject, as in profiling (Bosco et al. 2015; Crawford and Schultz 2014; Floridi 2012; Hildebrandt 2008a, 2008b, 2016; Hildebrandt and Gutwirth 2008; Kennedy 2016; Leese 2014; Petkova and Bohem 2018).

In the literature (Brkan 2019, 100–101) the concrete efficacy of this discipline has been disputed, given that individual decisions are often the consequence of collective profiling and general policies (that is, of automated collective decisions), and that an individual decision can be masked as a formally collective one.

Art. 22, moreover, applies to individual automated decisions that produce legal effects or similarly and significantly affect the data subject. The interpretation of these last two expressions is, again, quite difficult, particularly the second one.

It should refer to decisions which affect situations for which the recipient does not have a defined legal position but, at most, some form of legitimate expectation.

The examples given by Recital 71, namely the refusal of credit and automated hiring practices, are of limited explanatory value, since they refer to situations in which it is by no means excluded that we are dealing with legal positions in the full sense: think of all the obligations the parties are subject to because of the general clause of bona fide in the conduct of contractual negotiations, whose violation can also determine specific “legal effects”. To take the Italian case as an example, in accordance with the judicial interpretation of the principle of bona fide in negotiations, the unjustified interruption of negotiations and the subsequent refusal of financing entail legal responsibility for the bank in those cases in which the previous negotiations had reached the point of creating a legitimate expectation (called “affidamento”) in the credit applicant (see, for the general principle: ex plurimis Cass., n. 11438/2004; Cass., n. 7768/2007; for the specific case of credit denial: Trib. Piacenza, November 17, 2015, n. 846).

From another point of view, one can ask whether the suitability to significantly affect the data subject should be assessed ex ante, with reference to a standardized indicator, or at the moment in which the decision itself is taken, with reference to the specific features of the case in question. In line with the deontic function for the most part recognized in the provision – that is to say, to prescribe a prohibition addressed to the data controller – the first hypothesis should be considered correct.

By virtue of the same general principle of accountability mentioned above, it is clear that the data controller must know in advance which of the processes he is organizing can be completely automated up to the final decision and which ones, instead, must not, providing in the latter cases for significant human intervention and structuring his organization accordingly. Thus the interpretation that calls for a subjective ex post criterion would leave the data controller in doubt as to whether he is fully compliant or, alternatively, would oblige him to adopt very flexible processes that can be converted from totally automated to semi-automated, with human intervention, as the need arises. This seems too demanding an interpretation, one that risks excessively limiting innovation.

The issue of automated decisions that affect the data subject in a way “similar” to those with legal effects is also relevant to the general ethical issues raised about algorithmic decision-making (Mittelstadt et al. 2016; Kraemer, van Overveld, and Peterson 2011). The very vagueness of that expression can be exploited to offer a sort of remedy to the problem of the potential discriminatory capacity of algorithms (Sweeney 2013; Henderson 2017; Friedman and Nissenbaum 1996). Indeed, decisions that exclude the data subject from opportunities (credit, employment, education, health, and so on) can be considered to affect him or her significantly, and this is exactly why the GDPR asks in these cases for significant human intervention.

Thus, the “human in the loop” should evaluate the potential discriminatory or unfair threat of automated processing as well as act to protect the dignity of the recipients (Art29WP 2018, 21–22).

Instead, it seems more difficult to include in the prohibition of art. 22 GDPR the so-called nudging techniques (Yeung 2017): i.e. those modalities of conditioning towards certain choices that do not exclude different options, although these are somehow discouraged, for example by the way they are presented. Even in these cases, however, a reading that emphasizes the restrictive principle should admit the possibility of including at least those cases in which these techniques reach such a level of conditioning (without, of course, amounting to deception) as to require a higher-than-average level of attention from the users of the service in question, and always provided that the nudging results in significant consequences, similar to legal effects, for those subjects (Veale and Edwards 2018, 401).

3 Automatic decision making and “safeguard measures”: Demarcation and paradoxes

Moving now to the question of the right to contest, we must briefly discuss the exceptions to the general restrictive principle: that is, those situations in which it is permissible to process data automatically, ending in a decision that has legal effects or similarly affects the data subject, without any relevant human intervention.

Art. 22, § 2 GDPR mentions three hypotheses, referring to situations in which the decision: a) is necessary for the conclusion or execution of a contract; b) is authorized by the law of the European Union or of the Member State to which the controller is subject; c) is based on the data subject’s consent.

In all three cases “adequate measures” must be provided to protect the rights, freedoms and legitimate interests of the data subject: by the data controller in cases a) and c), and by the authorizing regulatory acts in case b) (Roig 2018).

Two issues, in particular, deserve consideration here.

The first concerns something usually not noted in the literature: with reference to the hypothesis sub b), the totally automated decision may be authorized by the law of the Member State “to which the data controller is subject” and not, therefore, by the law of the State to which the data subject belongs. Thus, hypothetically, the controller may be authorized by his own national law to operate in an automated manner even if the resulting decisions are directed to recipients subject to other Member States, whose national law perhaps does not provide for similar authorizations.

In these cases, even the limitation clause stated in art. 23 GDPR would not help. That article provides for the possibility that national law may limit the obligations and rights set forth in artt. 12–22 GDPR, when such restrictions respect the essence of fundamental rights and freedoms and are necessary and proportionate measures in a democratic society. Among those cases, art. 23 mentions the need for the “protection of the data subject or the rights and freedoms of others” (art. 23, i). But, again, this prerogative is given to the law to which the data controller and the processor – and not the data subject – are subject.

Since these are exceptional situations with respect to a restrictive principle designed to better protect the data subject, it would probably have been more appropriate to require (at least) a convergence on this point between the national systems to which the data subject and the controller respectively belong.

Secondly, with regard to the common provision of suitable measures for the protection of the rights and freedoms of the data subject, only in cases a) and c) is their minimum content specified, with the indication that they must guarantee at least the right to obtain human intervention, to express one’s opinion and, indeed, to contest the decision.

Such a specification is missing with reference to the hypothesis sub b), thus generating an interpretative doubt relevant to our topic. In fact, the lack of reference on this point could be understood to mean that the Regulation has assigned to the law of the Union, or of the Member State to which the data controller is subject, a wider freedom in determining, even as a minimum, the “adequate safeguards” against the risks derivable from automated decision-making. The consequence would be that the rights mentioned above, and in particular the right to contest the decision, might not be confirmed by the authorizing legislation, which could instead provide for other, different measures.

On the contrary, following a different interpretation (Art29WP 2018, 27), it could be argued that the concept of suitable measures, although made explicit as a minimum only with reference to the hypotheses under a) and c), is implicit also in the case under b). Since building complex decision-making systems is quite demanding in terms of organizational burden, having a mandatory minimum level of safeguards to put in place for every case of legitimate automated decision-making could help the data controller fulfill his compliance duties.

The limit of this last interpretation is, of course, that it departs significantly from the litera legis of the third paragraph of art. 22 GDPR which, actually, is very clear in excluding the hypothesis sub b).

Secondly, even granting that the Regulation considers these rights a minimum that cannot be denied, the protection given – having its source in the Regulation itself – would be maximal only with reference to internal laws, by virtue of the so-called primauté of EU law, but would instead be limited with respect to EU law itself. Thus, it would cover just half of the provision in art. 22, § 2, b).

However, there are other arguments supporting the thesis that at least the right to express one’s opinion and, for reasons that will be better specified below, also the right to contest the decision are to be included in any case. Since they express fundamental rights of the individual, they find coverage at a level that cannot be derogated from by the Regulation itself.

From this same point of view, there may instead be serious doubts concerning the “right to obtain human intervention”: perhaps the time has come to ask whether it can be considered to correspond, today, in the information society, to a new fundamental right. Can we consider it a new “human right” that we might try to define as “the right to have one’s own juridical and existential sphere changed significantly only as a consequence of a human relationship”? Now, the existence of such a fundamental right – or its existence in that formulation – is debatable, especially for those endorsing a radical version of the authoritative narrative about the rise of the information society as informed by a general objectivist and non-anthropocentric moral approach (Floridi 2002a, 2002b). According to this approach, the ontology of the Cybersphere should be thought of as including the human – at a certain level of abstraction – but within a wider context in which the underlying ontological unit is no longer the human being but, instead, the concept of information. Radicalizing this scenario, it becomes possible to refuse the idea of a fundamental human right not to see one’s existence significantly modified without proper human intervention, arguing that human and non-human are nowadays all to be considered “informational objects” interacting in the Infosphere, and that legal protection should not hinder this state of affairs.

This topic, which can only be hinted at here, is linked to that of the future of fundamental rights as “human” or, rather, “post-human” (Whitehead and Wesch 2012): as safeguard positions for any type of subjectivity, however defined, even including artifacts endowed with high-level functions capable of performing tasks with human-like levels of autonomy and effectiveness.

Another interesting problem, although not discussed in the literature, is whether the rights mentioned here can be considered mutually exclusive. For example: would it be possible to claim the right to contest when the right to human intervention has already been exercised?

This specific problem arises because, in order for the disposition to make any sense, the human intervention must be significant according to the same criterion we saw earlier in interpreting the concept of a decision not based “solely” on automated data processing. The person called to intervene must therefore have effective competence and authority to change, if appropriate, the decision taken by the machine. Now, this decision could be confirmed or rejected, but in both cases it could no longer be said to be totally automated. Consequently, there would no longer be a hypothesis of exception to the general restrictive principle, since the latter applies only to decisions based “solely” on automated processing. Therefore, it could be argued, the safeguard measures envisaged for the exceptional cases should no longer apply either. In this sense, human intervention would determine the exhaustion of the specific rights enjoyed by the data subject in the situation of total automation.

Under this interpretation, it should therefore be concluded that at least the right to obtain human intervention and the right to contest the decision are, in fact, mutually exclusive.

Going deeper, other interesting considerations can be made for the hypothesis in which the data subject immediately exercises his right to contest the decision instead of (only) obtaining human intervention. In this case, along with the contestation, the right to express one’s opinion will also be exercised, since the reasons for the dispute raised obviously express the data subject’s standpoint. But what about the right to human intervention? Would it necessarily be included in the contestation of the decision, and therefore be “absorbed” by it?

In the literature (Almada 2019), human intervention has been considered the only way to handle a contestation, but this is not necessarily the case. As a matter of fact, the data controller could set up another fully automated procedure for handling disputes (Mingardo 2017).

Is there anything in the GDPR that could prevent this situation from occurring?

An articulated contestation introduces further information on the data subject’s specific case and, therefore, new personal data. The automated processing of these data for the purpose of settling the dispute would identify a new hypothesis of automated decision-making to be considered, again, under the rules provided for by art. 22 GDPR. Now, the general principle, as we saw, would prohibit completely automated processing ending in a fully automated decision (this time, about the contestation). But nothing in the text seems to prevent the occurrence of one of the exceptional hypotheses stated in art. 22, § 2.

Actually, nothing seems logically to hinder a sort of recursive automation: the decision on the dispute raised could be a necessary condition for concluding or executing a contract, or could be authorized by the law of the Union or of the data controller’s Member State. Similarly, and this could be the most frequent situation, the data subject may have given his consent for this case as well.

In these situations, an automated decision settling the dispute could be legitimate and, here too, the safeguard measures seen so far should be guaranteed: that is, the right to obtain human intervention, to express one’s opinion and, again, to contest the decision. But at this point only an explicit request for human intervention could stop the chain of automatisms. This raises some paradoxes.

On one hand, the exercise of the right to human intervention would imply the consumption of the right to contest, because the contestation would no longer occur within a completely automated process. On the other, the exercise of the right to contest alone risks triggering a chain of automatisms that can only be interrupted by the exercise of the right to human intervention which, again, would lead to the exhaustion of the right to contest.
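The logical structure of this regress can be pictured in a schematic sketch (purely illustrative Python; the function names and the depth cut-off are hypothetical, and no actual GDPR procedure is modeled):

```python
# Schematic model of the "chain of automatisms" discussed above.
# All names are hypothetical; only the logical structure is illustrated.

def automated_decision(case: str) -> str:
    """Stand-in for a decision based solely on automated processing."""
    return f"automated outcome for <{case}>"

def handle(case: str, request_human: bool, depth: int = 0, max_depth: int = 5) -> str:
    decision = automated_decision(case)
    if request_human:
        # Human intervention halts the chain, but (on the formalistic
        # reading) the decision is no longer "solely automated", so the
        # art. 22 safeguards - including the right to contest - lapse.
        return f"human review of [{decision}] - right to contest exhausted"
    if depth >= max_depth:
        return decision  # nothing internal to the scheme ends the regress
    # A contestation introduces new personal data, which may again be
    # processed automatically: a new automated decision on the dispute.
    return handle(f"contestation of ({decision})", request_human, depth + 1, max_depth)

print(handle("initial case", request_human=False))  # regress, cut off only artificially
print(handle("initial case", request_human=True))   # chain stops, but contest is consumed
```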

It should be noted that this paradox threatens, at the end of the day, precisely the right to contest, which risks being abandoned to automated procedures or being exhausted in the mere request for human intervention, which, however, does not amount to a real process of taking charge of the dispute. The latter is, in fact, a more limited right and, for the reasons we are about to see, less protective than the right to contest understood in its proper sense.

To find a way out of this paradox, one can reflect on the different nature and content of the two rights cited and, in particular, on the logical implications of the two different situations.

From this point of view, it appears legitimate to raise doubts about the possibility of totally automated management even of the contestation, for the reason that, properly understood, the contestation consists in a fully defensive act, entirely particularized to the specific situation of the data subject, and therefore characterized by an eminently dialectical function. Its strong singularity makes its reduction to statistical-probabilistic determinations hardly acceptable.

4 The right to contest ex art. 22, § 3

The GDPR does not contain hypotheses of a “right to contest” other than that provided for by art. 22, § 3. However, immediately before the article in question, in art. 21, the data subject is given the right to “object” to the processing of his/her data for “reasons connected to his/her particular situation”, in cases where the basis of legitimacy of the processing is found in letters e) and f) of art. 6. These are cases in which the processing is “necessary for the performance of a task of public interest or connected to the exercise of public authority with which the data controller is invested”, or in which it is necessary to pursue the “legitimate interest of the data controller or third parties”.

Therefore, two quite different figures exist in the GDPR: the right to object and the right to contest. Both express ways of countering the activities of the data controller in order to protect the data subject.

The fact that they are distinct, and that art. 22 GDPR, immediately following the article that attributes the “right to object”, speaks of a “right to contest”, gives reason to presume, even on a prima facie interpretation, an important difference between the two figures that is worth understanding better.

First of all, the right to object, pursuant to art. 21, refers to the “processing” in itself, while the contestation refers to the “decision”: the former concerns the process, the latter the outcome of the processing.

In general, the right to object as designed by art. 21 GDPR has exclusionary effects, i.e. it acts as a “veto”: once it is exercised, the controller is obliged to interrupt the processing of the data. This effect, however, appears to be mandatory only in the hypothesis of processing for marketing purposes, ex art. 21, § 3, while in other cases it may be overcome. In the hypothesis of processing for marketing purposes, the objection does not even require a real justification: it is therefore a “mere veto”, from which the immediate halting of the processing must follow.

In the other cases, the objector will have to represent his “particular situation” in such a way as to show that it deserves to be treated as exceptional with respect to the basis of legitimacy of the processing invoked by the controller. This mode of objection has the effect of shifting onto the data controller who intends to continue processing the data subject’s data a further burden of justification. This should focus on a balance of the interests at stake in his favor, based either on the existence of compelling reasons to proceed, or on the need for judicial protection of a right (art. 21, § 1).

Thus, in the case of an objection, the very legitimacy of the processing is at stake: the validity, in the concrete case, of the specific legal basis used by the controller. The specific hypotheses in which a right to object can be exercised are those in which the processing was carried out neither on the basis of the specific consent of the data subject (art. 6, lett. a), nor on that of his prior contractual will (lett. b), nor because the controller was obliged by law to proceed with it (lett. c).

The right to object exists to protect a specific interest of the data subject: in this case, not to undergo a processing of his own data which he considers harmful, in those hypotheses in which he has no control over the initiative of the processing itself, since this depends not on him (nor is the processing mandatory), but on the controller, who exercises it with a more or less wide discretion.

In some ways, therefore, this right offers the possibility of balancing the interests by setting a limit on processing activated outside the initiative of the data subject. And it is rather interesting to note that even in these hypotheses the objection raised is not always sufficient to stop the processing.

But, in general, the legitimacy of the processing should not be confused with that of the modality (automated or not) of taking the decision that may follow. To give an example, the data subject may well have consented to the processing, even automated processing (thereby making it legitimate, ex art. 6, a), but not to being the recipient of a decision entirely based on automation (in which case the exception provided for by art. 22, § 2, c) would not apply).

Thus, the occurrence of the hypotheses for which art. 21 provides the right to object, since they refer to the legitimacy of the processing and not to the decision-making procedure that may follow, may not be sufficient to also legitimate a totally automated decision.

Having clarified the different content of the rights in question, which will presumably also correspond to a different moment of their exercise, it remains to understand the specific object and function of the right to contest. No precise indications in this regard come from the Recitals or from the Art29WP Guidelines, so it is necessary to reason interpretatively, taking a systematic approach.

In general, the right to challenge an act deemed detrimental to one’s rights and freedoms cannot be denied, since it constitutes a necessary precondition of the right of defense which, in turn, is a fundamental right. Every judicial defense has at its heart a contestation (Moro 2012, 25–26; Cavalla 1991) which, indeed, has an essential value in the very constitution of a valid procedural relationship.

The term itself (“right to contest”) derives from the ancient contestatio of Roman legal language. It indicated a crucial moment in the process (litis contestatio), marking the passage from the phase in iure to the stage apud iudicem, in which the parties invoked witnesses (testes estote) in order to fix the terms of the dispute to be judged (Dalla and Lambertini 2001, ch. 3; Guarino 1981, 265, 275; Radin 1924, 405). Its fundamental characteristics were: publicity; the argumentative determination of the specific object to be decided; and the transformation of the juridical relationship from substantive to procedural, since the subsequent probative and argumentative activity, as well as the pronouncement, had to take as their reference the dispute as fixed in the litis contestatio.

Thus, the act of contestation marks the point of transformation of the substantive juridical relationship into a more specifically procedural one. It consists in the externalized articulation of the terms of a specific dispute, which is thus made public, so that it can be dealt with in a procedure leading to a judgment. It is therefore an act with a very close relationship to the fundamental principles of due process and, given its defensive nature, preparatory to the opening of a possible jurisdictional phase, it cannot contradict them.

Consequently, when art. 22, § 3 requires the data controller to provide safeguard measures to protect the rights and freedoms of the data subject and, among these, specifically (at least) the right to contest the decision, it is, on the one hand, providing a right other than a mere objection – which, as mentioned, is regulated for other purposes. On the other hand, it is evidently doing something more than merely recognizing a right to act judicially against the violation of rights, which could not be denied in any case.

It is, in effect, internalizing that specific fundamental right within the very organization that uses automated processes, without, of course, ever being able to exclude its exercise in the ordinary jurisdictional forms.

Indeed, by requiring the data controller to provide the means for such a right to be secured, the Regulation evidently asks for some room, within the organization that uses automation, to manage potential disputes.

The rationale for such a provision is quickly stated: precisely because of the organizational apparatus the data controller puts in place, the speed of the processes in which automation serves as an essential tool and, on the other hand, their potential for damage, the law imposes, for the most elusive and threatening hypotheses, the establishment of specific venues for the management of disputes, faster and less formal than jurisdictional means, but no less protective. The specific reference to contestation points to the need for these instruments, which take charge of the dispute, to be firmly based on respect for procedural principles, first of all the audiatur et altera pars.

Since the right to contest is distinct from the other rights that must be minimally guaranteed, it clearly cannot be interpreted in such a way as to deprive it of meaning by reducing it to one or another of them. Compared to the right to “express one’s opinion”, it must include much more, since a contestation is not the expression of a mere opinion, but an articulated act of defense. And with respect to the right to “require human intervention”, it demands a more sophisticated management: a contestation requires a true legal dialectic and a judgment (Mendoza and Bygrave 2017, 93).

Thus, obliging the data controller to guarantee the right to contest the decision means prescribing that he adopt forms of dispute management that respect the co-implicated procedural rights, such as the right to be heard, the right to evidence and the right to an equidistant decision, which are at the very core of the Western juridical tradition (Cavalla 2017, 210; Moro 2004, 2014; Zanuso and Fuselli 2011). The first, which implies the structuring of a real argumentative and dialectical exchange and a “testing” of reciprocal claims (Sommaggio 2012), ensures that the right to contest does not “collapse” into mere human intervention; the second, that it does not result in the mere expression of a personal opinion, since arguments must be brought in its support; the third, that there is an actual taking charge of all the claims raised, and not only of those relating to the efficiency of the automated processes.

The interpretation proposed here also makes it possible to find a more specific ratio in the exclusion of minimal means of protection in the hypothesis of automated decisions authorized by the law of the Union or of the Member State to which the controller belongs (art. 22, § 2, b), and to resolve the interpretative doubt mentioned in the previous paragraph.

These are cases, in fact, in which the automation of relationships is not left entirely to the data controller’s initiative, for his interests alone, but is evaluated by the legislator in balance with the public interest expressed in the given regulatory framework.

Now, within that framework there are already structures for challenging an automated decision that violates rights, namely those that make up the organization of justice; a further institution of this kind would thus have been completely pleonastic. If anything, in this case the legislator will have to concern itself with establishing further measures with a higher level of protection.

The right to contest, therefore, in this interpretation, far from being a mere faculty of complaining about the decision, becomes the keystone of a system of protections that is certainly highly demanding for the data controller – who, on the other hand, greatly benefits from automation – and requires the necessary collaboration between the parties in articulating a genuinely adversarial structure.

Taking the right to contest seriously means including contestability in the very concept-building of algorithmic decision-making systems, which leads us to consider the existence in the GDPR of a contestability by design principle (Almada 2019; Mulligan, Kluttz, and Kohli 2020; Hildebrandt 2016).

In this interpretative perspective on art. 22, § 3 GDPR, the remedial means provided for therein are to be seen as organized in a progressive order of protection: from a minimum given by human intervention to a maximum given by a real juridical-dialectical structure capable of absorbing the others and going much further, thus leaving to the data subject the choice of which means to use before asking for full-fledged judicial protection.

The progressiveness of the protections is even more evident in the formulation of Recital 71, which offers a first hermeneutical aid for reading art. 22. There, the rights that should constitute the hard core of the safeguard measures are more numerous than those finally included in art. 22 itself.

Specifically, we find in addition: the right to “specific information” for the data subject, and the right to obtain “an explanation of the decision”. Read together, the progression is evident: from specific information, to the possibility of talking to someone, to obtaining technical explanations and, on the basis of all this acquired knowledge, to the possibility of a complete contestation of the decision in the fullest sense.

Does the absence of these rights from the binding normative text – in particular the right to specific information and the much discussed right to explanation – imply that they are not owed to the data subject? Can he seriously contest a decision without the knowledge base they provide?

For the reasons mentioned above, the answer should be negative; rather, what has been omitted from the text of art. 22, § 3 should be considered implicit, as a condition of operability of the recognized rights, so that they do not become meaningless. In particular, the right to contest, deprived of the fullest possibility of understanding the specific reasons that led to a decision – which, let us not forget, has “legal effects” or similarly affects the data subject – would resolve into a somewhat useless faculty of complaining about the decision received, which would have no real protective meaning and, above all, would deny the juridical sense of the concept (see also Brkan 2019, 114; Pagallo 2018).

Finally, returning to the paradoxes mentioned above: since the acknowledged rights are arranged in a progressive manner, we expect each subsequent right to include the previous ones, and not vice versa. Thus, the right to specific information (Recital 71) does not exhaust the right to human intervention; the latter does not exhaust the right to express one’s opinion (which may, of course, be exercised together with the former, but not necessarily); and none of these exhausts the right to contest, which can certainly be invoked even after the previous ones have been used.

This last point deserves a little more explanation, in particular with reference to the relationship between the right to human intervention and the right to contest. At this point it should be clear that, as far as the level of protection is concerned, the former may offer far less than the latter, and thus cannot absorb it. Actually, the presence of a “human in the loop” can, per se, be of little help to the recipient of the decision unless the parties engage in a full dialectical exchange. A mere human intervention, even by a qualified human, who simply rechecks the correct functioning of the system and confirms or modifies the decision, without any substantial dialogue with the data subject, could satisfy the request for human intervention, but surely not the need to manage a full contestation in the sense outlined so far. Therefore, the exercise of the right to contest alone – because of its essentially dialectical nature and its greater inclusiveness – also implies the use of the others as conditions of possibility of a full contesting activity. If well exercised, it will make use of all the faculties implicit in the previous rights, absorbing them, but not vice versa.

In the previous pages, however, a formalistic issue was also raised. It was pointed out that mere human intervention – even if it cannot guarantee protection as full as the right to contest, and thus cannot absorb its content – can nevertheless exhaust it. In fact, the mere presence of a (qualified) human in the loop can put the case outside the conditions of applicability of art. 22 GDPR, which concerns decisions based “solely” on automated processing, thus precluding recourse to the other safeguard measures. This is a serious issue, since it could lead to abuse: it could be exploited to easily deprive the data subject of full protection. Think of a company that instructs its qualified executive to intervene only to constantly confirm the automated decisions.

To deal with this problem, I suggest analytically distinguishing the case of mere confirmation from that of substantial modification of the decision.

The exhaustion problem should be limited to the second case, since a new, not completely automated decision is taken.

In the first case, instead, since no modification has occurred, the case remains in the conditions determined by the machine alone, and thus the other, deeper safeguard measures should still be available.

5 Contestation and explanation

The formalistic argument based on the lack of mention, in the normative text of art. 22 GDPR, of some rights included in Recital 71 was raised, together with others, in an influential contribution arguing that the GDPR contains no true and proper “right to explanation” of the decision owed to the data subject (Wachter, Mittelstadt, and Floridi 2017).

This is an extremely sensitive issue, due to the fact that the most promising Artificial Intelligence tools, in particular those based on so-called deep learning, come with considerable complexity, as well as a certain degree of opacity about the specific ways in which “well trained” machines reach their outcomes (Burrell 2016; Larus et al. 2018, 9; LeCun, Bengio, and Hinton 2015; Vellido, Martín-Guerrero, and Lisboa 2012).

Asking the data controller who uses such tools to be ready to provide an authentic and exhaustive explanation of the decisions taken by the machine, on the basis of correlations that the machine itself has found, may appear equivalent to preventing him from using such technologies. In the current context of technological development this seems hardly acceptable, and it may raise doubts about the compatibility of the GDPR with the age of Big Data (Chivot and Castro 2019; Zarsky 2016/2017).

On the other hand, the right to contest is the true keystone of the progressive set-up protecting the data subject against the abuse of decision-making automatisms. As already noted, the exercise of this right triggers a dialectical exchange that calls for a decision, respecting the procedural guarantees of a rational and fruitful discussion, but it has no predetermined specific content. The contestation will make up a singular case, based on the arguments and reciprocal claims the parties raise against one another. Therefore, in general, the provision of a right to contest calls for an extension of the accountability duties of the data controller beyond those related to the beginning and development of the processing, such as the information duties provided for by artt. 13 and 14 GDPR.

We must therefore enter into the discussions regarding the so-called right to explanation, because the request for further information and explanation in order to articulate the dispute seems legitimate. As a matter of fact, for a contestation to be solid it needs arguments, arguments need knowledge, and knowledge needs information. Therefore, denying the legitimacy of the data subject’s request for more information seems tantamount to denying his very right to contest.

On the other hand, neither would it make sense to impose on the controller an impossible obligation, given the intrinsic opacity of certain instruments.

So, the question of the existence and content of a right to explanation cannot be solved in abstracto, but only in relation to the needs of a substantive right to contest.

Now, to sum up briefly the state of the art on this point, we can divide the literature that has most dealt with the right to explanation into three main orientations: on the one hand, those who, as mentioned, doubt the very existence of such a right in the context of the GDPR (Wachter, Mittelstadt, and Floridi 2017; Art29WP 2018, 25); on the other, those who challenge this opinion (Goodman and Flaxman 2017; Brkan 2019); and, finally, those who believe that the problem must either be moved towards a reconfiguration of the type of right in question – for example, towards a less demanding right to legibility (Malgieri and Comandé 2017) – or be focused on other provisions of the Regulation, more directly related to the construction of modern, reliable information societies (Edwards and Veale 2017).

The first opinion argues, as mentioned, from the lack of an explicit mention of the right in question in the normative text. Since the right to explanation appeared, in conformity with Recital 71, in some of the preparatory texts presented and discussed, its omission from the final text is considered the expression of a precise legislative intention to eliminate it. Nor could it be derived from the other articles imposing information duties on the data controller, or giving the data subject the right of access: these, indeed, are meant to be used before or during the processing but, in any case, before the outcome that closes it. Moreover, the law already provides for a specific level of information to be given, which does not amount to a full-fledged explanation. Art. 13, § 2, f), art. 14, § 2, g) and art. 15, § 1, h) speak of “meaningful information” on the type of process, on the “logic applied” and on the significance and envisaged consequences of the processing.

This communication, then, is not considered to constitute a real right to explanation, if by this we mean the right to know “the rationale, reasons, and individual circumstances of a specific automated decision, e.g. the weighting of features, machine-defined case-specific decision rules, information about reference or profile groups” (Wachter, Mittelstadt, and Floridi 2017, 78).

The first argument, it has been said, is excessively formalistic and radicalizes the distance between the Recital and the binding text. This distance certainly exists but, as the authors themselves admit, the Recitals perform an important function in guiding interpretation. Therefore, the very presence in them of a more complex articulation of rights suggests that the literal argument – which is, after all, itself an interpretative argument – should not be overestimated on a hermeneutical level.

In this case, the argument would prescribe excluding the right merely because no direct sign of it appears. The argument is certainly not decisive: first, because of the poor heuristic value of the so-called “literal meaning” (Mazzarese 2000); and secondly, because of the unjustified limit it seems to place on systematic interpretation, which works precisely on the relationships between concepts, with a desirable orientation towards the consistency of the legal system represented. Thus, when the doubtful formalistic argument ends up depriving of meaning or of practical applicability other rights that are, on the contrary, explicit – as in our case the right to contest – it must certainly be set aside in favor of a more consistent meaning.

From this point of view, those who admit the existence and relevance of the right to explanation can assume that Recital 71 does nothing more than say more openly – by clarifying its assumptions – what is conceptually included in the more synthetic normative expression, thereby offering suggestions for a more correct systematic interpretation.

The third approach among those mentioned is particularly interesting for two reasons. On the one hand, it emphasizes the idea that the construction of protection in the face of the critical issues raised by algorithmic decision-making must take place on several levels, and not only on that of the comparison between the rights and duties of the subjects involved. This is certainly true, and it conforms to the sense of the social, economic and even cultural transformations in progress, which are changing even the structure of the communities whose work forges shared juridical knowledge (Miller and Record 2013; Sarra 2018).

On the other hand, it acknowledges that the human-machine “commonality” actually takes place on the pragmatic level constituted by the acceptability of the results, not on that of the identity of the decision-making procedures actually employed. The machine that classifies and “decides” reaches its results according to procedures very different from the (holistic-interpretative) ones that we humans would use, even when both arrive at the same outcome (Crafa 2019; Sánchez Hidalgo 2019; Burrell 2016, 16). Here, as elsewhere, the use of anthropomorphizing terms such as “decision”, “will”, etc. does not help, because it creates the illusion of an a priori commensurability between activities that are, instead, profoundly different.

The analytical description of the processes the machine has carried out, including the indication of the connection values (weights) between the nodes of a complex series of layers in a neural network, which the machine has itself optimized, shows the “how” of the process, not the “why”. Even if it were possible to provide it in detail, it would not necessarily constitute an explanation adequate to the need to resolve the specific dispute and, therefore, it would be useless. The machine does not know this distinction: its own “why” is its “how”.
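To make the point concrete, consider a minimal sketch (assuming Python with scikit-learn and NumPy, used purely for illustration; the data and the “applicant” framing are invented): every parameter of a small trained classifier can be printed in full, yet this exhaustive description of the “how” yields nothing resembling a juridically usable reason for an individual decision.

```python
# Minimal illustration: full access to a model's parameters (the "how")
# does not amount to a reason for a decision (the "why").
# Synthetic data; assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # four arbitrary applicant features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # invented "favorable outcome" label

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

applicant = X[:1]
print("decision:", clf.predict(applicant)[0])

# The complete analytical description of the process is available...
for i, (w, b) in enumerate(zip(clf.coefs_, clf.intercepts_)):
    print(f"layer {i}: weights {w.shape}, biases {b.shape}")
    print(w, b)
# ...but nothing in these numbers states a reason why *this*
# applicant was classified as he or she was.
```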

In other words, the machine’s operation expresses a purely analytical rationality, while juridical rationality is eminently dialectical (Cavalla 2011, 2017; Moro 2014, 2019). The request for the specific “why” can be radical, capable of calling into question the premises underlying the choices that led to the very structuring of an experience through automation. If necessary, it can also raise, with respect to that experience as a whole, a demand for justice.

This last point is particularly important and reveals what is really at stake in the discussions about the so-called “right to explanation”. It is not the legitimacy of the data subject’s request to make the processes leading to the decision understandable. As mentioned, the informational rights referred to in artt. 13–14 and the right of access (art. 15) already guarantee a certain level of ex ante comprehensibility of the processes. And it is correct, given what has been said so far, that once a decision whose consequences – “legal” or in any case “significant” – are unacceptable has been received, the right to contest entitles the data subject to a higher and more specific level of information and explanation, because of the very needs of a meaningful contestation.

On the one hand, the ex ante informational level is abstract and identical for all those subject to the same type of automated procedure; on the other, mere acceptance of the result produces a concordance between controller and data subject that is individual but purely pragmatic, and which therefore excludes the sharing of a complete rational account of the matter.

Both levels (the ex ante, rational but abstract; and the individual, but merely pragmatic) rest on exclusivist logics: the first ignores the irreducible diversity of each singular case, the second the intimately juridical need for a rational commensuration between individual and common reasons for action.

Contestation opens up precisely this theme: a search for mediation, which is in fact the search for a plane of practical rationality that situates the controversial case within the set of generalities identifying a particular shared version of the "good" society. From the standpoint of the history of law, this is the value of the casuistic approach (Gábriš 2019).

The point is, therefore, that the very concept of "explanation" is not at all univocal. Even the notion already cited and discussed in the literature is imperfect (Wachter, Mittelstadt, and Floridi 2017, 78): on the one hand, it does not specify what counts as "rationale" or "reason", nor with what criteria the "individual circumstances" are to be chosen and limited. On the other hand, it is not to be taken for granted that a juridically and pragmatically valid "explanation" must always include everything that a certain technical knowledge can interpret as such.

In other words, the notion of explanation just cited is aprioristic, abstract and framed e lato titularis, since it reads the right to explanation in terms of the data controller's compliance needs: what a controller should say whenever a data subject exercises this right. Instead, compliance needs must here be seen as means for mediating the dispute, not as mere pre-established fulfilment.

Therefore, the right to explanation must be seen as a condition of operability for the right to contest, and this leads to rejecting an abstract and absolute notion of "explanation": its content needs to be linked to the specific position that the interested party intends to assume in the controversy (see also: Wachter, Mittelstadt, and Russell 2018).
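
This relativity can be illustrated with a minimal sketch in the spirit of the counterfactual approach of Wachter, Mittelstadt, and Russell (2018); the linear model, the features, the threshold and the function names are all hypothetical, chosen only to make the point visible.

```python
import numpy as np

weights = np.array([0.6, 0.3, 0.1])    # toy linear scoring model (hypothetical)
threshold = 1.0                        # approval threshold (hypothetical)

def approved(x: np.ndarray) -> bool:
    """The automated 'decision': approve if the score clears the threshold."""
    return float(x @ weights) >= threshold

def counterfactual(x: np.ndarray, feature: int) -> float:
    """Smallest increase in one chosen feature that would flip the decision.

    Which feature is queried is precisely what varies with the stance
    the data subject takes in the contestation."""
    gap = threshold - float(x @ weights)
    return gap / weights[feature]

applicant = np.array([0.5, 1.0, 2.0])  # rejected: score 0.8 < 1.0
assert not approved(applicant)
for f in range(len(weights)):
    print(f"feature {f}: increase by {counterfactual(applicant, f):.2f} to be approved")
```

Even in this trivially transparent model there is no single counterfactual: which answer the controller owes depends on which change the data subject is actually in a position to dispute or pursue.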

When the data subject has no interest in challenging the decision, because he shares the machine's outcome, even if his reasons differ from those that can be represented as the "reasons" of the automated decision, there is no sense in recognizing a right to an analytical explanation: he may exercise, if he wishes, the right to human intervention and to information, together with the right to express his own point of view, completing a picture which, however, is not under dispute.

When, on the other hand, he rejects the decision and therefore uses his right to contest, the request for an explanation also becomes legitimate, and its precise content will depend on the opposition and on the controversy that is taking shape. In other words, the practicability of the right to explanation is connected to that of contestation, and its precise content is likewise situational and relative: the dialogue of the controversy will establish to what extent the "explanation" should be a "technical description" (the "how") and to what extent, instead, a "justification" (the "why"). In this sense, I propose to consider the "right" to explanation rather as a "duty" to answer the needs of comprehension concerning the entire relationship determined by the use of intelligent machines.

To conclude on this point, I argue that in the context of the safeguard measures against the risks of automated decision-making, there is no such thing as a valid explanation per se. What counts as an explanation cannot be established in advance, nor can an a priori level of technicality be set that is sufficient to elude the need for a case-by-case approach to the many forms a contestation can take.

Compliance on this point is rather to be evaluated on the basis of the tools and processes established to deal with the contestation, in accordance with the principles of adversarial procedure and the need for an impartial, mediative decision.

Anyone who decides to entrust to machines relationships that are legal, or otherwise capable of significantly affecting the existence of others, must be ready to give an account (respondere) also of the basic choices of his organization, including why he made use of more or less opaque instruments, and must be able to demonstrate that he takes on a shared, social rationality, and not only a technical, sector-based, specialized knowledge.

6 Conclusions

To draw some conclusions from the path followed, I summarize what has been said in a few brief points.

The proposal supported in these pages is, in essence, to take very seriously the provision of the right to contest within the rules governing automated decisions, and to interpret those rules so as to exploit that right fully.

In this sense it has been shown, first of all, that there are very close interrelations between the subjective rights provided for by art. 22 GDPR, and that the best reading, the one that reduces or eliminates paradoxes, is the one that avoids seeing them as competing with each other. On the contrary, they must be seen as a progressive structure of protection culminating precisely in the right to contest.

Secondly, this progressive protection also implies the need to make effective the prerequisites for exercising the recognized rights. Therefore, the rights indicated in Recital 71 but not explicitly stated in the text of art. 22, such as the right to specific information and to explanation, should be considered included among the safeguards. Their content is a function of the protection actually invoked, which is at its maximum in the case of a complex and precise contestation, which will therefore encompass all the other rights.

Thirdly, given the procedural nature of the right to contest, the possibility of its exercise implies respect for the principle of adversarial procedure, which in turn requires an equal and constructive exchange that best highlights both the points in dispute and those on which the parties already agree.

Fourthly, the question of the "right to explanation" must be placed in this context. As a condition of practicability of a meaningful contestation it certainly exists, but it does not take shape according to an abstract model of rationality, let alone reduce to pure technical description.

The data controller's compliance on this point must refer to the establishment of means and venues for managing the dispute, as well as to its ability to respond adequately to the request for explanation in the terms in which it is actually raised in the contestation: these are also the terms in which we can agree on the existence of a contestability by design principle in the GDPR.

The explanation to be given refers to the specific case, with technical details sufficient to build a dialogue that highlights the point in dispute and can show that the determination made makes sense with respect to (and does not harm) the standard of widespread social acceptability for similar operations.

Fifthly, everything should be able to result in a judgment.

To conclude, on a more general level it should be noted that there is another reason why it is appropriate to insist on a radical enhancement of the right to contest.

Such an enhancement brings into play the need to measure the forms of rationality operating in contemporary society, forms that intersect, unconsciously, at those points represented by decisions affecting intersubjective relationships.

Faced with complex and opaque processes, factual acceptance of outcomes is a pragmatic phenomenon that says nothing about the degree of rational agreement among the subjects involved. The current pervasiveness of datafication, and the presumably increasing presence of automated decision-making procedures, offer an image of society as made up of agents (human and not) who cross paths in the mutual acceptance of decisions without any exchange of discursive rationality.

This is, of course, a sign of today's domination of technology, and we cannot help but wonder whether the law can really support it to the end. Let us remember that even the acceptability of a certain arrangement can be a technical product: the product of a power so strong that it is able to eliminate the critical point of view (the one that contests, articulating reasons on which it claims rational exchange), thereby also eliminating the need to acknowledge that point of view and measure oneself against it.

Would we still call “ius” a regulation that allows this?

References

Almada, M. 2019. Human Intervention in Automated Decision-Making: Toward the Construction of Contestable Systems. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 26 January 2019. doi:10.1145/3322640.3326699.

ART29WP. 2018. Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, last revised and adopted on 6 February 2018.

Bosco, F., N. Creemers, V. Ferrari, D. Guagnin, and B.-J. Koops. 2015. "Profiling Technologies and Fundamental Rights and Values: Regulatory Challenges and Perspectives from European Data Protection Authorities." In Reforming European Data Protection Law, edited by S. Gutwirth, R. Leenes, and P. de Hert, 3–33. Dordrecht: Springer. doi:10.1007/978-94-017-9385-8_1.

Boyd, D., and K. Crawford. 2012. "Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon." Information, Communication & Society 15 (5): 662–79. doi:10.1080/1369118X.2012.678878.

Brkan, M. 2019. "Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond." International Journal of Law and Information Technology 27 (2): 91–121. doi:10.2139/ssrn.3124901.

Burrell, J. 2016. "How the Machine 'Thinks': Understanding Opacity in Machine Learning Algorithms." Big Data & Society 3 (1): 1–12. doi:10.1177/2053951715622512.

Bygrave, L. A. 2001. "Automated Profiling: Minding the Machine: Article 15 of the EC Data Protection Directive and Automated Profiling." Computer Law & Security Review 17 (1): 17–24. doi:10.1016/S0267-3649(01)00104-2.

Bygrave, L. A. 2019. Minding the Machine v2.0: The EU General Data Protection Regulation and Automated Decision Making. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 6 February 2019. https://papers.ssrn.com/abstract=3329868. doi:10.1093/oso/9780198838494.003.0011.

Cavalla, F. 1991. La prospettiva processuale del diritto. Saggio sul pensiero di Enrico Opocher. Padova: CEDAM.

Cavalla, F. 2011. All'origine del diritto, al tramonto della legge. Napoli: Jovene.

Cavalla, F. 2017. L'origine e il diritto. Milano: FrancoAngeli.

Chivot, E., and D. Castro. 2019. The EU Needs to Reform the GDPR to Remain Competitive in the Algorithmic Economy. Center for Data Innovation (blog), 13 May 2019.

Crafa, S. 2019. "Artificial Intelligence and Human Dialogue." Journal of Ethics and Legal Technologies 1 (1): 44–56.

Crawford, K., and J. Schultz. 2014. "Big Data and Due Process: Towards a Framework to Redress Predictive Privacy Harms." Boston College Law Review 55 (1): 93–125.

Dalla, D., and R. Lambertini. 2001. Istituzioni di Diritto romano. Torino: Giappichelli.

De Hert, P., and V. Papakonstantinou. 2016. "The New General Data Protection Regulation: Still a Sound System for the Protection of Individuals?" Computer Law and Security Review 32 (2): 179–94. doi:10.1016/j.clsr.2016.02.006.

Dobbe, R., S. Dean, T. Gilbert, and N. Kohli. 2018. A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. arXiv:1807.00553. http://arxiv.org/abs/1807.00553.

Edwards, L., and M. Veale. 2017. "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For." Duke Law and Technology Review 16 (1): 1–65. doi:10.31228/osf.io/97upg.

Faini, F. 2019. Data Society. Torino: Giappichelli.

Floridi, L. 2002a. "On the Intrinsic Value of Information Objects and the Infosphere." Ethics and Information Technology 4 (4): 287–304. doi:10.2139/ssrn.3849254.

Floridi, L. 2002b. "What Is the Philosophy of Information?" Metaphilosophy 33 (1–2): 123–45. doi:10.1093/acprof:oso/9780199232383.003.0001.

Floridi, L. 2012. "Big Data and Their Epistemological Challenge." Philosophy & Technology 25 (4): 435–37. doi:10.1007/s13347-012-0093-4.

Friedman, B., and H. Nissenbaum. 1996. "Bias in Computer Systems." ACM Transactions on Information Systems 14 (3): 330–47. doi:10.4324/9781315259697-23.

Gábriš, T. 2019. "Systematic versus Casuistic Approach to Law: On the Benefits of Legal Casuistry." Journal of Ethics and Legal Technologies 1: 57–76.

Goodman, B., and S. Flaxman. 2017. "European Union Regulations on Algorithmic Decision-Making and a 'Right to Explanation'." AI Magazine 38 (3): 50–57. doi:10.1609/aimag.v38i3.2741.

Guarino, A. 1981. Storia del diritto romano. Napoli: Jovene.

Guidotti, R., A. Monreale, S. Ruggieri, et al. 2018. "A Survey of Methods for Explaining Black Box Models." ACM Computing Surveys 51 (5): 1–42. doi:10.1145/3236009.

Harkens, A. 2018. "The Ghost in the Legal Machine: Algorithmic Governmentality, Economy and the Practice of Law." Journal of Information, Communication & Ethics in Society 16 (1): 16–31. doi:10.1108/JICES-09-2016-0038.

Henderson, T. 2017. Does the GDPR Help or Hinder Fair Algorithmic Decision-Making? SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 21 August 2017. doi:10.2139/ssrn.3140887.

Hildebrandt, M. 2008a. "Defining Profiling: A New Type of Knowledge." In Profiling the European Citizen: Cross-Disciplinary Perspectives, edited by M. Hildebrandt and S. Gutwirth, 17–35. Dordrecht: Springer. doi:10.1007/978-1-4020-6914-7_2.

Hildebrandt, M. 2008b. "Profiling and the Rule of Law." Identity in the Information Society 1: 55–70. doi:10.1007/s12394-008-0003-1.

Hildebrandt, M. 2016. "The New Imbroglio: Living with Machine Algorithms." In The Art of Ethics in the Information Society, edited by L. Janssens, 55–60. Amsterdam: Amsterdam University Press. doi:10.5117/9789462984493.

Hildebrandt, M., and S. Gutwirth, eds. 2008. Profiling the European Citizen: Cross-Disciplinary Perspectives. Dordrecht: Springer. doi:10.1007/978-1-4020-6914-7_1.

Kaltheuner, F., and E. Bietti. 2017. "Data Is Power: Towards Additional Guidance on Profiling and Automated Decision-Making in the GDPR." Journal of Information Rights, Policy and Practice 2 (2): 1–17. doi:10.21039/irpandp.v2i2.45.

Kennedy, H. 2016. Post, Mine, Repeat: Social Media Data Mining Becomes Ordinary. London: Palgrave Macmillan. doi:10.1057/978-1-137-35398-6.

Kitchin, R. 2017. "Thinking Critically about and Researching Algorithms." Information, Communication & Society 20 (1): 14–29. doi:10.4324/9781351200677-2.

Kraemer, F., K. van Overveld, and M. Peterson. 2011. "Is There an Ethics of Algorithms?" Ethics and Information Technology 13 (3): 251–60. doi:10.1007/s10676-010-9233-7.

Larus, J., C. Hankin, S. Granum Carson, M. Christen, S. Crafa, et al. 2018. When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making. ACM. doi:10.1145/3185595.

LeCun, Y., Y. Bengio, and G. Hinton. 2015. "Deep Learning." Nature 521 (7553): 436–44. doi:10.1038/nature14539.

Leese, M. 2014. "The New Profiling: Algorithms, Black Boxes and the Failure of Anti-discriminatory Safeguards in the European Union." Security Dialogue 45 (5): 494–511. doi:10.1177/0967010614544204.

Lycett, M. 2013. "'Datafication': Making Sense of (Big) Data in a Complex World." European Journal of Information Systems 22: 381–86. doi:10.1057/ejis.2013.10.

Malgieri, G., and G. Comandé. 2017. "Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation." International Data Privacy Law 7 (4): 243–65. doi:10.1093/idpl/ipx019.

Marwick, A. 2012. "The Public Domain: Social Surveillance in Everyday Life." Surveillance & Society 9 (4): 378–93. doi:10.24908/ss.v9i4.4342.

Mayer-Schömberger, V., and K. Cukier. 2013. Big Data: A Revolution That Will Transform How We Live, Work and Think. Boston, New York: Houghton Mifflin Harcourt.

Mazzarese, T. 2000. "Interpretazione letterale: giuristi e linguisti a confronto." In Significato letterale e interpretazione del diritto, edited by V. Velluzzi, 95–136. Torino: Giappichelli.

Mendoza, I., and L. A. Bygrave. 2017. "The Right Not to Be Subject to Automated Decisions Based on Profiling." In EU Internet Law: Regulation and Enforcement, edited by T.-E. Synodinou, P. Jougleux, C. Markou, and T. Prastitou, 77–98. Cham: Springer International Publishing. doi:10.1007/978-3-319-64955-9_4.

Miller, B., and I. Record. 2013. "Justified Belief in a Digital Age: On the Epistemic Implications of Secret Internet Technologies." Episteme 10 (2): 117–34. doi:10.1017/epi.2013.11.

Mingardo, L. 2017. "Online Dispute Resolution. Involuzioni ed evoluzioni di telematica giuridica." In Tecnodiritto: temi e problemi di informatica e robotica giuridica, edited by P. Moro and C. Sarra, 121–40. Milano: FrancoAngeli.

Mittelstadt, B. D., P. Allo, M. Taddeo, S. Wachter, and L. Floridi. 2016. "The Ethics of Algorithms: Mapping the Debate." Big Data & Society 3 (2): 1–21. doi:10.1177/2053951716679679.

Moro, P. 2004. La via della giustizia. Pordenone: Libreria Al Segno.

Moro, P. 2012. "Il diritto come processo. Una prospettiva critica per il giurista contemporaneo." In Il diritto come processo. Principi regole e brocardi per la formazione critica del giurista, edited by P. Moro, 9–36. Milano: FrancoAngeli.

Moro, P. 2014. All'origine del Nómos nella Grecia classica. Una prospettiva della legge per il presente. Milano: FrancoAngeli.

Moro, P. 2019. "Intelligenza artificiale e professioni legali. La questione del metodo." Journal of Ethics and Legal Technologies 1: 24–43.

Mulligan, K. D., D. N. Kluttz, and N. Kohli. Forthcoming 2020. "Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions." In After the Digital Tornado, edited by K. Werbach. Cambridge: Cambridge University Press.

Pagallo, U. 2018. "Algo-Rhythms: The Beat of the Legal Drum." Philosophy and Technology 31 (4): 507–24. doi:10.1007/s13347-017-0277-z.

Pasquale, F. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, Massachusetts: Harvard University Press. doi:10.4159/harvard.9780674736061.

Petkova, B., and F. Boehm. 2018. "Profiling and the Essence of Data Protection." In Cambridge Handbook of Consumer Privacy, edited by J. Polonetsky, O. Tene, and E. Selinger, 285–300. Cambridge: Cambridge University Press. doi:10.1017/9781316831960.017.

Radin, M. 1924. "Fundamental Concepts of the Roman Law." California Law Review 12 (5): 393–410. doi:10.2307/3475876.

Roig, A. 2018. "Safeguards for the Right Not to Be Subject to a Decision Based Solely on Automated Processing (Article 22 GDPR)." European Journal of Law and Technology 8 (3): 1–17.

Sánchez Hidalgo, A. J. 2019. "Neuro-Evolucionismo y Deep Machine Learning: nuevos desafíos para el derecho." Journal of Ethics and Legal Technologies 1: 115–36.

Sarra, C. 2017. "Business Intelligence ed esigenze di tutela: criticità del c.d. Data Mining." In Tecnodiritto. Temi e problemi di informatica e robotica giuridica, edited by P. Moro and C. Sarra, 41–63. Milano: FrancoAngeli.

Sarra, C. 2018. "'Iper-positività': la riduzione del giuridicamente lecito al tecnicamente possibile nella società dell'informazione." In Positività giuridica. Studi ed attualizzazione di un concetto complesso, edited by C. Sarra and M. I. Garrido Gómez, 95–125. Padova: Padova University Press.

Sarra, C. 2019. "Data Mining and Knowledge Discovery: Preliminaries for a Critical Examination of the Data Driven Society." Global Jurist 0 (0). doi:10.1515/gj-2019-0016.

Sommaggio, P. 2012. Contraddittorio, giudizio, mediazione. La danza del demone mediano. Milano: FrancoAngeli.

Sweeney, L. 2013. "Discrimination in Online Ad Delivery." Communications of the ACM 56 (5): 44–54. doi:10.1145/2447976.2447990.

Veale, M., and L. Edwards. 2018. "Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling." Computer Law & Security Review 34 (2): 398–404. doi:10.1016/j.clsr.2017.12.002.

Vellido, A., J. D. Martín-Guerrero, and P. J. G. Lisboa. 2012. "Making Machine Learning Models Interpretable." In ESANN 2012 Proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 163–72.

Wachter, S., B. Mittelstadt, and L. Floridi. 2017. "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation." International Data Privacy Law 7 (2): 76–99. doi:10.1093/idpl/ipx005.

Wachter, S., B. Mittelstadt, and C. Russell. 2018. "Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR." Harvard Journal of Law and Technology 31 (2): 841–87. doi:10.2139/ssrn.3063289.

Wagner, B. 2019. "Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems." Policy & Internet 11 (1): 104–22. doi:10.1002/poi3.198.

Whitehead, N. L., and M. Wesch. 2012. Human No More: Digital Subjectivities, Unhuman Subjects, and the End of Anthropology. Boulder, Colorado: University Press of Colorado.

Yeung, K. 2017. "'Hypernudge': Big Data as a Mode of Regulation by Design." Information, Communication & Society 20 (1): 118–36. doi:10.4324/9781351200677-8.

Zanuso, F., and S. Fuselli. 2011. Il lascito di Atena. Funzioni, strumenti ed esiti della controversia giuridica. Milano: FrancoAngeli.

Zarsky, T. 2016. "The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making." Science, Technology, & Human Values 41 (1): 118–32. doi:10.1177/0162243915605575.

Zarsky, T. 2016/2017. "Incompatible: The GDPR in the Age of Big Data." Seton Hall Law Review 47: 995–1020.

Zech, H. 2017. "Building a European Data Economy." IIC - International Review of Intellectual Property and Competition Law 48 (5): 501–03. doi:10.1007/s40319-017-0604-z.

Published Online: 2020-03-20

© 2020 Claudio Sarra, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
