Came across this today while reading two posts - emergency call triggered by the driver's adrenaline surge? - and it got me thinking...
How would you react if your Mini, as you reached for the stop button, called out:
"No! Please don't switch me off! I'm afraid of the dark!"
This was examined in a study by Horstmann - not with a Mini, but with a 40 cm tall robot.
Of 43 participants, 13 left it switched on ...
All the others took twice as long before switching it off ...
When machines act human-like, do we feel compassion and treat them like humans?
https://duepublico.uni-duisbur…/DocumentServlet?id=46811
Quote from Horstmann: Objection as a sign of autonomy
Free will is the "capacity of rational agents to choose a course of action from among various alternatives" and is connected to a person's autonomy [51] (para. 1). The term autonomy is derived from auto (= self) and nomos (= law), which can be translated to self-rule or self-government [52]. From an objective point of view, electronic devices are not self-governed. Instead, they are told what to do by their users or programmers and there is no autonomous will comparable to the will of a human. However, based on the media equation theory [9], people may treat these devices as if they had a free will when they display certain behaviors which are characteristic for autonomous living beings. Even abstract geometrical shapes that move on a computer screen are perceived as being alive [53], in particular, if they seem to interact purposefully with their environment [54]. According to Bartneck and Forlizzi [15], "autonomous actions will be perceived as intentional behavior which is usually only ascribed to living beings".
People automatically consider autonomously acting agents as responsible for their actions [55]. Moreover, unexpected behaviors, like rebellious actions, are especially perceived as autonomous [56]. Thus, when a robot provides evidence of its autonomy regarding the decision whether it stays switched on or not, it is more likely perceived as a living social agent with personal beliefs and desires. Switching the robot off would resemble interfering with its personal freedom, which is morally reprehensible when done to a human. Thus, people should be more inhibited to switch off the robot when it displays protest and fear about being turned off. Consequently, the following reactions are hypothesized:
H3.1: Individuals rather choose to let the robot stay on, when it voices an objection against being switched off compared to when no objection is voiced by the robot.
H3.2: Individuals take more time to switch off the robot, when the robot voices an objection to being switched off compared to when no objection is voiced by the robot.
In addition to the main effects of the social interaction and the objection of the robot, an interaction effect of these two factors should be considered. During the social interaction, the robot should already elicit the impression of autonomous thinking and feeling by sharing stories about its personal experiences and its preferences and aversions. In combination with the robot's request to be left on, this impression of human-like autonomy should become more present and convincing. Consequently, the following effects are assumed:
H4.1: The intention to switch off a robot is especially low, when the interaction before is social in combination with an objection voiced by the robot.
When AI moves into the car, your dream will become reality!