Panel
1. Uneven Geographies, Ecologies, Technologies and Human Futures
In the contemporary landscape of AI-driven healthcare, two nations conspicuously delineate themselves as vanguards: China and the United Kingdom. Rooted in extensive ethnographic research spanning the UK and China, this paper delves into a juxtaposition of these two emblematic cases. While both nations fervently champion their avant-garde strides in the AI realm, evidenced by their aggressive positioning in the global AI discourse, the public’s perception of their ethical AI pursuits contrasts starkly. The Chinese “social credit system” has been heavily critiqued, often depicted in the mass media as an Orwellian surveillance apparatus. Meanwhile, the UK’s ostensibly similar initiatives, such as a pilot scheme incentivizing healthier behaviours, have been met with only muted apprehension. This disparate reception reveals an implicit bias, wherein algorithmic ethics appear tethered to nations’ prevailing sociocultural narratives and institutional legacies. This paper, however, seeks to dismantle such myopic interpretations. It underscores the transnational intricacies underpinning algorithms in healthcare: systems operational in Chinese and UK facilities may have roots in countries as disparate as Australia or South Africa; the programmers evaluating these systems may span continents, collaborating remotely; and the medical professionals interfacing with these algorithms have often trained internationally. Thus, while national AI policies undeniably sculpt the human-AI nexus, especially among medical practitioners, this paper argues that AI’s essence is intrinsically cosmopolitan. Its tendrils cross geographical, cultural, and political borders, demanding a more holistic, anthropologically informed understanding that eschews reductionist national narratives.
Co-Authors
Roanne van Voorst
Guo Zongtian
University of Amsterdam, Netherlands