SEARCH CONTENT

You are looking at 51–60 of 300 items:

  • Media Theory

Abstract

Data engagement has become an important facet of engaged citizenship. While this is celebrated by those who advocate for expanding participatory channels in civic experience, others have rightfully expressed concern about the complicated dimensions of balancing access with data literacy. If engaged citizenship increasingly requires the ability to interpret civic data through city dashboards and open data portals, then there is a concomitant requirement for diverse populations to develop critical perspectives on data representation (what is commonly referred to as data visualisation, information graphics, etc.). Effective data representations are used to ground conversations, communicate policy ideas and substantiate arguments about important civic issues, but they are also frequently used to deceive and mislead. Expanding statistical, graphical, digital and media literacy is a necessary component of fostering a critical data culture, but who are the beneficiaries of expanded models of literacy and modes of civic engagement? Which communities are invalidated in the design of civic data interfaces? In this article, I summarise the results of a design study undertaken to inform the development of accessible data representation techniques. In this study, I conducted fourteen two-hour participatory design-inspired interview sessions with blind and visually impaired citizens. These sessions, in which I iteratively developed new physical data objects and assessed their interpretability, leveraged a public transit dataset made available by the City of Toronto through its open data portal. While ostensibly “open,” this dataset was initially published in a format that was exclusively visual, excluding blind and visually impaired citizens from engaging with it. What I discovered through the study was that the process of translating 2D, screen-based civic dashboards and data visualisations into tangible objects has the capacity to reintroduce visual biases in ways that data designers may not generally be aware of.

Abstract

This article considers the medial logics of American terrorist watchlist screening in order to study the ways in which digital inequities result from specific computational parameters. Central to its analysis is Secure Flight, an automated prescreening program run by the Transportation Security Administration (TSA) that identifies low- and high-risk airline passengers through name-matching algorithms. Considering Secure Flight through the framework of biopolitics, this article examines how passenger information is aggregated, assessed and scored in order to construct racialised assemblages of passengers that reify discourses of American exceptionalism. Racialisation here is neither a consequence of big data nor a motivating force behind the production of risk-assessment programs. Both positions would maintain that discrimination is simply an effect of an information management system that considers privacy as its ultimate goal, an effect easily mitigated with more accurate algorithms. Not simply emerging as an effect of discriminatory practices at airport security, racialisation formats the specific techniques embedded in terrorist watchlist matching, in particular the strategies used to transliterate names across different script systems. I thus argue that the biopolitical production of racialised assemblages forms the ground zero of Secure Flight’s computational parameters, as well as its claims to accuracy. This article concludes by proposing a move away from the call to solve digital inequities with more precise algorithms in order to carefully interrogate the forms of power complicit in the production and use of big data analytics.
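
The abstract refers to name-matching over romanised transliterations without detailing the mechanics. Secure Flight's actual matching logic is not public, so the following toy Python sketch is purely illustrative: it uses a plain edit-distance threshold (an assumed, hypothetical matcher, with invented names and threshold) to show how the many routine transliteration variants of a single name can all fall within a loose match of one watchlist entry, which is one concrete way such computational parameters can widen the pool of passengers swept into additional screening.

```python
# Illustrative sketch only; not Secure Flight's actual (undisclosed) algorithm.
# A loose fuzzy-matching threshold treats common romanised transliteration
# variants of the same name as hits against a single watchlist entry.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

WATCHLIST_ENTRY = "mohammed hassan"  # hypothetical, romanised entry
THRESHOLD = 3                        # hypothetical loose matching threshold

# Hypothetical passengers; the first three are ordinary transliteration
# variants of the same Arabic-script name, the last is unrelated.
passengers = [
    "muhammad hassan",
    "mohamed hasan",
    "mohammad hassen",
    "michael hanson",
]

for name in passengers:
    distance = levenshtein(name, WATCHLIST_ENTRY)
    flagged = distance <= THRESHOLD
    print(f"{name:20s} distance={distance}  flagged={flagged}")
```

Under these assumed parameters, all three transliteration variants are flagged while the unrelated name is not, which sketches how the burden of a "generous" matching threshold falls unevenly across naming and script traditions.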
