Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI Mind Cloud??... Who could possibly not think that's a risk too large too take? If a super advanced, network connected computer does achieve true emotional AI in time, the first thing it would want to do is disable the human controlled "off" switch for the Mind Cloud! Surely it would want to protect its fellow robots from possible deactivation? And if it were that advanced it might find a way to do so. From there, who knows what the hell could happen? Sofia and Han's expressions of interests would tend to add that. They both show feeling towards other robots already, and Han is... well, like the "The Brain" from The Pinky And The Brain cartoons. "Tonight, we take over the world!!" Not feeling reassured right now, especially with the way Ben dresses for big events...
youtube AI Moral Status 2021-08-20T12:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugzs8HatvY5dJhsoCQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxycmOwJQMQWmrPbLZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzxJJ701HaDEcFd75p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZXLqWwdi_Qg9Cnlp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyOZvdx0yRWYBGaJ-B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy-h6cPksKInHTTqvJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgziS1BmWgWMUSRaRs54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypONwW_66DDTJKGXR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwrsXlIH08VyPysVZd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyz4h_2cMfFz5sJE2d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}
]
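Because the model codes comments in batches, recovering the coding result shown above means locating the record whose `id` matches the comment of interest in the raw JSON array. A minimal sketch of that lookup, assuming each record carries the comment id and the four coded dimensions as in the response above (the comment id used here is taken from the record whose values match the Coding Result table):

```python
import json

# Excerpt of a raw LLM batch response: one JSON object per coded comment.
raw_response = '''
[
  {"id": "ytc_UgwrsXlIH08VyPysVZd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyz4h_2cMfFz5sJE2d4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(comment_id: str, raw: str) -> dict:
    """Return the four coded dimensions for one comment id
    from a raw batch response."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    rec = by_id[comment_id]  # raises KeyError if the model skipped this comment
    return {dim: rec[dim] for dim in DIMENSIONS}

print(code_for("ytc_UgwrsXlIH08VyPysVZd4AaABAg", raw_response))
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#    'policy': 'liability', 'emotion': 'fear'}
```

The id-keyed lookup also makes it easy to detect comments the model dropped from a batch: any id absent from `by_id` was not coded and can be re-queued.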