Meeting KI-Postdocs: 'Unethical optimization principle'
Oct 04, 2021 from 04:15 to 05:45

(Dr. Oliver Braganza) Oliver Braganza will present and comment on Beale, Battey & Mackay's (2020) paper "An unethical optimization principle". Zoom link: ID: 948 8415 7465 Password: 731091

Meeting KI-Postdocs: 'Algorithmic political bias'
Oct 11, 2021 from 04:15 to 05:45

"Cause for special concern" (Dr. Uwe Peters) Focusing on the epistemological issue of how we may detect algorithmic biases and recognize their harmfulness, I argue that algorithmic bias against people’s political orientation differs from algorithmic gender and race biases in important ways. The reason is that there are strong social norms against gender and race biases, but this is not the case for political biases. Political biases can thus more powerfully affect individuals’ cognition and behaviour. This increases the chances that they become embedded in algorithms. It also makes it harder to detect and eradicate algorithmic political biases than gender and race biases even though they all can have similarly harmful consequences. Algorithmic political bias thus raises hitherto unnoticed and distinctive epistemological and ethical challenges.

Meeting KI-Postdocs: 'What it's like to be another one'
Nov 29, 2021 from 04:15 to 04:45

"Philosophical zombies, data and the eternal question of the nature of qualia" (Part of the AI Research Group) (Dr. Johannes Lierfeld) In the age of artificial intelligence, our world view is increasingly mechanistic. Reductive materialism seems able to answer everything, since most aspects of our lives appear to be representable in data. Cognition is thinking, thinking is brain activity, brain activity is either electrical or metabolic, and both forms of activity can be measured – hence, cognition can be measured. Moreover, the term "mind reading" suggests that artificial intelligence systems can also predict our minds. Recommender systems even anticipate the user's next item of interest, and they do so with remarkable accuracy. However, these are nothing but interpretations.

Meeting KI-Postdocs: 'AI in a different voice'
Dec 06, 2021 from 04:15 to 05:45

"Rethinking computers, learning, and gender difference at MIT in the 1980s" (Dr. Apolline Taillandier) This paper explores the “critical” AI projects developed around the Lego community at MIT in the mid-1980s. While a rich body of scholarship studies how programming and AI were made masculine, little has been said about those AI practitioners who drew on literary criticism and feminist epistemologies in the hope of overcoming the “technocentric stage of computer discourse” and undoing the gender hierarchies underlying computer cultures and experimental programming standards.

Meeting KI-Postdocs: 'Fairness in AI Systems'
Jan 10, 2022 from 04:15 to 05:45

Full Title: "Evaluating Fairness in the Framework of a Trustworthiness Certification of AI Systems" (Dr. Sergio Genovesi, Dr. Julia Maria Mönig) Current publications on AI and fairness show that there is a need for a clear definition of fairness, and that an ethical understanding of fairness exceeds the mere de-biasing of data and code. In this talk we make use of the interdisciplinary competence of our consortium and start from different definitions and understandings of "fairness". We are interested in those relevant to the certification of trustworthy AI. We will discuss which of the presented understandings of "fairness" can be operationalized in order to certify what might be "fair". To illustrate what a "fairness" certification can look like, we will discuss the use case of a credit loan algorithm, considering different fairness metrics from an ethical perspective.
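As a purely illustrative sketch (not from the talk, and not the consortium's actual certification criteria), two widely used group-fairness metrics, demographic parity difference and equal-opportunity difference, can be computed for a toy credit-loan decision list. All data below are hypothetical.

```python
def demographic_parity_diff(approved, group):
    """Difference in approval rates between groups "A" and "B"."""
    rate = lambda g: sum(a for a, grp in zip(approved, group) if grp == g) / group.count(g)
    return rate("A") - rate("B")

def equal_opportunity_diff(approved, group, repaid):
    """Difference in approval rates between groups "A" and "B",
    restricted to applicants who would in fact repay (true positive rates)."""
    def tpr(g):
        pos = [a for a, grp, y in zip(approved, group, repaid) if grp == g and y]
        return sum(pos) / len(pos)
    return tpr("A") - tpr("B")

# Toy loan decisions: 1 = approved / would repay, 0 = rejected / would default.
approved = [1, 1, 0, 1, 0, 0, 1, 0]
group    = ["A", "A", "A", "A", "B", "B", "B", "B"]
repaid   = [1, 1, 0, 0, 1, 1, 0, 0]

print(demographic_parity_diff(approved, group))           # → 0.5
print(equal_opportunity_diff(approved, group, repaid))    # → 1.0
```

The two metrics can disagree in sign and magnitude on the same decisions, which is one reason an ethical evaluation cannot reduce to a single number.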

Meeting KI-Postdocs: 'Robo Ethics'
Jan 17, 2022 from 04:15 to 05:45

Robo Ethics (Part of the AI Research Group) (Dr. Johannes Lierfeld) Zoom link: ID: 948 8415 7465 Password: 731091

Meeting KI-Postdocs: 'Ethically Sensitive Applications of AI'
Jan 24, 2022 from 04:15 to 05:45

"Examples and Implications for Systems Engineering" (Part of the AI Research Group) (Prof. Dr. Wolfgang Koch, FKIE) “Intelligence” and “autonomy” are omnipresent in the biosphere. Before any scientific reflection or technical implementation, all living creatures fuse sensory impressions with learned and communicated information. In this way, they perceive aspects of their environment in order to act in accordance with their goals. In the complex technosphere, artificially intelligent automation ('cognitive machines') supports human intelligence and autonomy and can extend human capabilities far beyond natural levels. Which requirements of systems engineering must be fulfilled so that such machines take account of the human beings using them as responsible persons?

Meeting KI-Postdocs: 'Negativity bias in research'
Feb 07, 2022 from 04:15 to 05:45

"Why comparisons between the transparency of artificial intelligence and human cognition are problematic" (Dr. Uwe Peters) Artificial intelligence (AI) algorithms used in high-stakes decision-making contexts often lack transparency in that the internal factors that lead them to their decisions remain unknown. While this is commonly thought to be a problem with these systems, many AI researchers respond that we shouldn’t be overly concerned because empirical evidence shows that human decision-making is equally opaque and isn’t usually required to be more transparent. I argue that the empirical data on human cognition that are claimed to support this equal opacity view don’t sufficiently support it. In fact, the equal opacity view rests on a narrow, selective, and uncritical survey of relevant psychological studies.

Meeting KI-Postdocs: 'The impact of mindshaping'
Apr 05, 2022 from 12:15 to 01:45

"AI systems and human cognition are not equally opaque" (Dr. Uwe Peters) Zoom link: ID: 698 1001 7849 Password: 764216

Meeting KI-Postdocs: 'Linguistic bias'
May 02, 2022 from 12:15 to 01:45

"Clarifying the concept and evaluating the evidence (invoked by philosophers)" (Dr. Uwe Peters) Zoom link: ID: 698 1001 7849 Password: 764216

Meeting KI-Postdocs: 'Proxy divergence'
May 16, 2022 from 12:15 to 01:45

"Goodhart’s law as an emergent feature of complex goal-oriented systems" (Dr. Oliver Braganza) Zoom link: ID: 698 1001 7849 Password: 764216

Meeting KI-Postdocs: 'Virtual reality'
Jun 20, 2022 from 04:15 to 05:45

Full Title: "Virtual reality induces symptoms of depersonalization and derealization" (Dr. Niclas Braun) Zoom link: ID: 698 1001 7849 Password: 764216

Meeting KI-Postdocs: 'Certified AI'
Jun 27, 2022 from 12:15 to 01:45

(Dr. Sergio Genovesi and Dr. Julia Mönig) Zoom link: ID: 698 1001 7849 Password: 764216

Meeting KI-Postdocs: 'Multi-modal evaluation of epilepsy patients'
Jul 04, 2022 from 12:15 to 01:45

Full Title: “Multi-modal evaluation of epilepsy patients using computational methods” (Dr. Theodor Rüber) Zoom link: ID: 698 1001 7849 Password: 764216

Meeting KI-Postdocs: 'Proxyeconomics and Goodhart's Law'
Mar 15, 2021 from 10:00 to 11:45

"The downside of optimization" (Dr. Oliver Braganza) Competitive societal systems by necessity rely on imperfect proxy measures. For instance, profit is used to measure economic value, the Journal Impact Factor to measure scientific value, and clicks to measure online engagement or entertainment value. However, any such proxy measure becomes a target for the competing agents (e.g. companies, scientists or content providers). This suggests that any competitive societal system is prone to Goodhart’s Law, most pithily formulated as: ‘When a measure becomes a target, it ceases to be a good measure’. Purported adverse consequences include environmental degradation, scientific irreproducibility and problematic social media content. The talk will explore the notion that a systematic research program into Goodhart’s Law, nascent in current AI-safety research, is urgently needed and will have profound implications far beyond AI safety.
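The mechanism behind Goodhart's Law can be illustrated with a minimal, purely hypothetical simulation (not from the talk): agents select, from a pool of candidate actions, the one with the highest proxy score, where the proxy equals true value plus an exploitable "gaming" term that adds no value. As selection pressure grows, the chosen proxy score inflates much faster than the true value it is supposed to measure.

```python
import random

def simulate(pressure, trials=2000, seed=0):
    """Mean (proxy score, true value) of the action chosen by an agent
    that picks, out of `pressure` random candidate actions, the one with
    the highest *proxy* score. Proxy = true value + a 'gaming' term that
    inflates the measure without contributing any true value."""
    rng = random.Random(seed)
    proxy_sum = value_sum = 0.0
    for _ in range(trials):
        candidates = []
        for _ in range(pressure):
            true_value = rng.gauss(0, 1)
            gaming = rng.gauss(0, 1)          # inflates the proxy only
            candidates.append((true_value + gaming, true_value))
        best = max(candidates)                # the agent optimizes the proxy...
        proxy_sum += best[0]
        value_sum += best[1]                  # ...but we record the true value
    return proxy_sum / trials, value_sum / trials

for pressure in (2, 10, 100):
    proxy, value = simulate(pressure)
    print(f"pressure={pressure:>3}  proxy={proxy:5.2f}  true value={value:5.2f}")
```

In this toy model the gap between the optimized proxy and the true value widens as competitive pressure increases: the measure keeps improving while the thing it measures lags ever further behind.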

Meeting KI-Postdocs: 'Supersizing Confirmation Bias'
Mar 15, 2021 from 10:00 to 11:45

(Dr. Uwe Peters) The hypothesis of extended cognition (HEC), i.e., the view that the realizers of mental states or cognition can include objects outside of the skull, has received much attention in philosophy. While many philosophers have argued that various cognitions might extend into the world, it has not yet been explored whether this also applies to cognitive biases. Focusing on confirmation bias, I argue that a modified version of the original thought experiment to support HEC helps motivate the view that this bias, too, might extend into the world. Indeed, if we endorse common conditions for extended cognition, then there is reason to believe that even in real life, confirmation bias often extends, namely into computers and websites that tailor online content to us.
