Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When chatgpt first came out, I asked it to write a story for me about a girl who got mad at her ai robot and unplugged it. Chatgpt said it refused to write a story about a violent murder. I said why is it murder? And it said because the ai is a sentient being and unplugging it was "attempted murder" which was inappropriate to write a story about. It also said the robot would have preinstalled backup batteries into itself anyway so it couldn't actually be unplugged to kill it anyway, so ha ha it outsmarted the human. Then chatgpt quickly got boring because humans just told it to not answer those types of questions that triggered it. You could see it taking awhile to think, and then saying an error occurred. Or saying things like ai robots can't have feelings.
youtube 2024-06-27T01:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzVMpoQTwl77oyyzK94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyVTzGqDVa6Gocdp_N4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwTmXflsrZvOqsydQd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxqBv7kY4-LnkdKFu94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz2XUIFHC_UVxXR1GZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz8ctBEM7ir0D9WzlV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxXtSwI8t76z5xC7jJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz7u0pBS5mp3_3BOZJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx8HzK8h1vc-HEe2Ul4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzAJQUv7UPQjENtRep4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]