Phan, Thao and Wark, Scott (2021) What personalisation can do for you! Or: how to do racial discrimination without 'race'. Culture Machine, 20. E-ISSN 1465-4121. (KAR id:99652)
PDF (Publisher PDF, 495kB)
Language: English
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Official URL: https://culturemachine.net/vol-20-machine-intellig...
Abstract
Between 2016 and 2020, Facebook allowed advertisers in the United States to target their advertisements using three broad ‘ethnic affinity’ categories: African American, U.S.-Hispanic, and Asian American. Superficially, these categories were supposed to allow advertisers to target demographic groups without using data about users’ race, which Facebook explicitly does not collect. This article uses the life and death of Facebook’s ‘ethnic affinity’ categories to argue that they exemplify a novel mode of racialisation made possible by machine learning techniques.
Adopting Wendy H. K. Chun's conceptualisation of race 'and/as' technology as an analytical frame, this article focuses on what 'ethnic affinity' categories do with race. 'Ethnic affinity' categories worked by analysing users' preferences and behaviour: they were supposed to capture an 'affinity' for a broad demographic group, rather than registering membership of that group. That is, they were supposed to allow advertisers to 'personalise' content for users depending on behaviourally determined affinities. We argue that, in effect, Facebook's ethnic affinity categories were supposed to operationalise a 'post-racial' mode of categorising users. But the paradox of personalisation is that in order to apprehend users as individuals, platforms must first assemble them into groups based on their likenesses with other individuals.
Even in the absence of data on a user's race—even after the demise of the categories themselves—users can still be subject to techniques of inclusion or exclusion for discriminatory ends. The inductive machine learning techniques that platforms like Facebook employ to classify users generate proxies, like racialised preferences or language use, as racialising substitutes. We conclude that Facebook's ethnic affinity categories in fact typify novel modes of racialisation that are often elided by the claim that using complex machine learning techniques to attend to our preferences will inaugurate a post-racial present. Discrimination is not personalisation's accidental product; it is its very condition of possibility. Like that of Facebook's ethnic affinity categories, discrimination's death has been greatly exaggerated.
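The proxy mechanism the abstract describes can be illustrated with a minimal, hypothetical sketch. This is not Facebook's actual system or the article's method: it uses synthetic data and an off-the-shelf scikit-learn classifier to show that a model trained only on 'preference' features can recover a withheld group label, simply because those features correlate with it.

```python
# Hypothetical sketch (synthetic data, not Facebook's system): a classifier
# never shown a protected attribute can still recover it from correlated
# "preference" features -- the statistical basis of proxy discrimination.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic protected attribute; it is withheld from the feature set.
group = rng.integers(0, 2, size=n)

# "Preference" features (e.g., pages liked, language use) whose distributions
# differ slightly by group -- this correlation is what creates the proxy.
prefs = rng.normal(loc=group[:, None] * 1.0, scale=1.0, size=(n, 5))

X_train, X_test, g_train, g_test = train_test_split(prefs, group, random_state=0)

# Predict the withheld group label from the preference features alone.
clf = LogisticRegression().fit(X_train, g_train)
print(f"Group recovered from preferences alone: {clf.score(X_test, g_test):.0%}")
# Accuracy lands well above the 50% chance baseline: features collected
# without any data about race can nonetheless encode it.
```

The point of the sketch is that no single feature needs to name race; the ensemble of 'race-blind' behavioural signals does the classificatory work, which is the sense in which discrimination is a condition of possibility for personalisation rather than its accident.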
Item Type: Article
Uncontrolled keywords: Advertising
Subjects: N Visual Arts > N Visual arts (General). For photography, see TR
Divisions: Divisions > Division of Arts and Humanities > School of Arts
Funders: Wellcome Trust (https://ror.org/029chgv08)
Depositing User: Scott Wark
Date Deposited: 23 Jan 2023 12:48 UTC
Last Modified: 05 Nov 2024 13:05 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/99652