Talk show: The utopia is here. What now? – AI in media use and media education

I was delighted to take part in the talk at the Sommerforum Medienkompetenz 2023 and to discuss the opportunities and risks of generative AI for education and media use with Prof. Dr. Caja Thimm and Florian Rampelt. Many thanks to my fellow panelists, the organisers Medienanstalt Berlin-Brandenburg (mabb) and Freiwillige Selbstkontrolle Fernsehen (FSF), coordinator Camilla Graubner, the team at Alex TV, the wonderful host Teresa Sickert, and Mercedes Bunz for her inspiring keynote.

In the discussion, it was important to me to advocate for the development and use of open AI models and systems in the education and media sectors in Europe. Transparency is a basic prerequisite for ethical AI – that cannot be stressed enough. Equally central is the establishment of diverse AI competence centres that engage with the technical, social, legal, and ethical challenges of generative AI in an inter- and transdisciplinary way and ensure knowledge transfer into practice and public discourse.

If you would like to watch it afterwards, you can find the recording of the discussion and event here:
Background information on the Sommerforum Medienkompetenz:

New publication: Algorithmische Fairness in der polizeilichen Ermittlungsarbeit

Algorithmische Fairness in der polizeilichen Ermittlungsarbeit: Ethische Analyse von Verfahren des maschinellen Lernens zur Gesichtserkennung

This article discusses fairness in artificial intelligence (AI) based policing procedures using facial recognition as an example. Algorithmic decisions based on discriminatory dynamics can (re)produce and automate injustice. AI fairness here concerns not only the creation and sharing of datasets or the training of models but also how systems are deployed in the real world. Quantifying fairness can distract from how discrimination and oppression translate concretely into social phenomena. Integrative approaches can help actively incorporate ethical, legal, social, and economic factors into technology development to more holistically assess the consequences of deployment through continuous interdisciplinary collaboration.

Brandner, Lou Therese, and Simon David Hirsbrunner. “Algorithmische Fairness in der polizeilichen Ermittlungsarbeit: Ethische Analyse von Verfahren des maschinellen Lernens zur Gesichtserkennung.” TATuP – Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis 32, no. 1 (March 23, 2023): 24–29.

Available in open access here:

Where the action is

“The number of acting units and the kinds of action are increasing for the first time, since modernity and enlightenment have successfully diminished it by banning moving objects and talking trees, inviting nymphs and punishing gods, speaking oracles and helpful angels out of the sphere of action into the world of fetish and fiction.” (Rammert 2008, 2)

riverine years

celebrate with Ula + Simon with snacks, drinks, castles, and nature

Hey! Another year, another birthday for both of us!

Join us on 4 May for a birthday picnic at the picturesque Neuer Garten in Potsdam, along the former border between coms and caps. We start at 3pm and will hopefully stay until dusk, depending on the mood of our new family members.

We will provide some fresh beer from the neighbouring Meierei brewery, soft drinks, and some small snacks. If you like, you’re welcome to add to the picnic (e.g. salad, cake, bread).

To get there, you can either take
– the S-Bahn (S1, S7) to Wannsee, then bus 316 to Glienicker Brücke (if you get off before the bridge, you only need a BVG AB ticket). Walk along Schwanenallee and through Neuer Garten.
– the REGIO (RE1) to Potsdam Hauptbahnhof, then tram 93 to Glienicker Brücke (you need a BVG ABC ticket). Walk along Berliner Str., Schwanenallee and through Neuer Garten.

We’re very much looking forward to it!!
Ula Papajak and Simon Hirsbrunner

or navigate via Google Maps: https://

Guilt by association

“If agency in all its forms is democratically distributed to all sorts of dividuals, some of which may temporarily be assembled as humans and others as machines, animals, or other quasi agents, then do we need to permanently bracket all forms of intrahuman judgment, accountability, and ethical discourse? Will future courts only be judges of assemblages of hands-guns-bodies-bullets and blood or of syringes-heroin-junkies-dealers or of ricin-envelopes-mailboxes-couriers and the like? And, worse, who will be the judges, witnesses, juries, prosecutors, and defenders? Will our very ideas of crime and punishment disappear into a bewildering landscape of actants, assemblages, and machines? If the only sociology left is the sociology of association, then will the only guilt left be guilt by association?” (Appadurai, Arjun. 2015. Mediants, materiality, normativity)

sustainable development and the demon of Laplace

The Achilles’ heel of the sustainable development concept is that it externalizes technology. Digitalization becomes a silver bullet dissolving all socio-economic-political challenges in the world – climate change, poverty, inequality, and so on. In the long run, destroying the horsemen of the apocalypse by summoning Laplace’s demon may come at a high price.

Image: Four Horsemen of the Apocalypse, an 1887 painting by Viktor Vasnetsov sourced from Wikimedia Commons.



ethics and legal status in social media analysis

Via Twitter, I asked the community for information on legal and ethical issues linked to social media analysis. Unfortunately, I didn’t get as many replies as I had hoped for. Does that mean that ethical and legal issues are not important for researchers? I don’t think so. Rather, it seems to be an unresolved field, entailing much embodied, tacit knowledge held by the researchers themselves, some dispersed information, and lots of open questions.

Maybe it is useful to mention some of the questions I had hoped to get answers to. They are mostly straightforward how-to questions, nothing conceptually or philosophically sophisticated.

– I have a Twitter dataset collected through TCAT with Tweets on ‘climate change’ since 2015. The relevant TCAT instance is hosted by the University of Siegen, my employer. Is it legally and ethically ok (hereafter ‘ok’) to put this data online in an open data repository such as figshare?
– What would be the appropriate licence for such an open dataset?
– Is it ok to mention specific Tweets and their users in a publication?
– Do I violate platform terms of service when scraping content in a research context?

It seemed useful to collect the fragments of my search here in this post. Not sure if this is helpful for others, but anyway, here you go:


– “Rethinking ethics in social-network research”, by Antonio A. Casilli and Paola Tubaro (2017)

– Wem “gehören” Forschungsdaten? [“Who ‘owns’ research data?”] Online article (2018) by the jurist Linda Kuschel

– Association of Internet Researchers (AoIR) Statement on ethics in internet research
  – Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee (Version 2.0)
  – A useful informational chart from the same source

– Moreno, M. A., Goniu, N., Moreno, P. S., & Diekema, D. (2013). Ethics of Social Media Research: Common Concerns and Practical Considerations.

– Michael Zimmer (2010). “But the data is already public”: On the ethics of research in Facebook

– “Ethical Dilemmas in Social Network Research,” a special issue of Social Networks from 2005