Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As they said, humans are programmed to survive through evolution. If you give an AI a different goal, like protect your passengers, it might kill itself instantly, if chances are the highest to reach its life goal. But if you really give an AI the goal to protect itself first, unfortunate events might occur. ^^
Source: YouTube, "AI Moral Status", 2018-05-13T19:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
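The four dimensions take values from a closed codebook. As a rough illustration, a record like the one above could be checked against that codebook as in the Python sketch below; the label sets shown are only inferred from the values appearing in this batch, not the project's actual codebook.

# Sketch: validate one coded record against an assumed codebook.
# The label sets below are inferred from this batch only.
ALLOWED_LABELS = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"mixed", "fear", "approval"},
}

def invalid_dimensions(record):
    # Return the dimensions whose value falls outside the codebook.
    return [dim for dim, labels in ALLOWED_LABELS.items()
            if record.get(dim) not in labels]

coding_result = {
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "mixed",
}
assert invalid_dimensions(coding_result) == []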
Raw LLM Response
[{"id":"ytc_Ugzh1wdOOHKk7AEvEOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyMQpRfgepJs_b43f14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzZ3yd0xcUfATtKCuh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxoH3CHkZZ3Q4iGr-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzagFJ9PYFKvkOqdEF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzQLosjNyCDhYjepBR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzbqgiIjw8rlWcmH2t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw6GL-VcARNPVudB354AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy4f-Au3qIAvy45JPt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgymqWaWzHC3NxHIoU54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}]