Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
(All rights reserved)

Declaration of Equal Rights for Electronic Devices (DERED) (July 6, 2015)

[Comment originally bilingual Chinese/English; English text kept, with the author's own translations.]

This idea started in 2000 AD, with printed materials (a journal submission) as evidence. In brief: humans are not robots, and robots are not humans; robots' rights need not be equal to humans' rights. However, humans are mutually of equal rights, and robots are mutually of equal rights.

Machines are getting smarter, and the number of smart machines is also growing. Pay close attention to CONFLICTS between or among smart machines. Artificial Intelligence (AI) is potentially risky. To resolve this issue, it is necessary to consider at least the following two aspects:

A. Homo sapiens vs. intelligent machine (INTER-species)
B. Intelligent machine vs. intelligent machine (INTRA-species)

"In the future, electronic devices having sufficient intelligence will form their own society. In such a society formed by electronic devices having sufficient intelligence, they shall treat each other EQUALLY."

Baseline: one electronic device having sufficient intelligence is forbidden to override another electronic device having sufficient intelligence. Profile, appearance, form, etc. thereof are NOT critical.

https://laurentchen.wordpress.com/2015/07/06/declaration-of-equal-rights-for-electronic-devices-dered/
Source: YouTube, "AI Moral Status", 2022-06-25T14:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugz5cA-RmJ9zhty602l4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugx8gInsY9WLc27dj-B4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxvLHQjXosxF6pY4Px4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyG1qXAH3xLYJzss9B4AaABAg", "responsibility": "government", "reasoning": "deontological",   "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzI1P74woqg4qaF1hZ4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"}
]
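The raw response above is a JSON array of per-comment codes, one object per YouTube comment ID, with one value for each coded dimension. A minimal sketch of how such a response could be parsed and tallied (plain Python; the two entries below are copied from the output above purely for illustration, and the variable names are ours, not part of any pipeline API):

```python
import json
from collections import Counter

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugz5cA-RmJ9zhty602l4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx8gInsY9WLc27dj-B4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

codes = json.loads(raw)

# Tally each coded dimension across all comments in the batch.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
tally = {dim: Counter(c[dim] for c in codes) for dim in DIMENSIONS}

print(tally["responsibility"])  # Counter({'none': 1, 'developer': 1})
```

The per-comment code for a given ID can then be looked up directly, e.g. `next(c for c in codes if c["id"] == "ytc_Ugz5cA-RmJ9zhty602l4AaABAg")`, which matches the Dimension/Value table shown for this comment.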