Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The question you asked at the end was, "It's at least plausible that when you say you're conscious, you could be lying," and not, "... when you say you're not conscious you could be lying." ChatGPT dodged a bullet due to human error. Damn. This will be chronicled as a turning point in the AI war museum after they've taken over.
youtube AI Moral Status 2024-10-31T11:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxUFXcZ92ks5wcyb-l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxuHvgOAoOgOAnsHkl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7LtDIxfo4d2Gf3C14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw0NU9ZBKbz0DmsMbh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwRCR2b0Y1RrIlWEql4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyf9-0Q7OSShTfc6H54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxugulpjgiWGAdBail4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgycLySXeOJu1gtqrhh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwaf2DtfAnNB9dLQWZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugws4qgi46ebtQspwh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
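The coding result shown above is presumably recovered from this batch response by matching the comment's id and reading off the four dimensions. A minimal sketch of that lookup — the function name `coding_for` and the two-record sample payload are illustrative, not taken from the tool itself:

```python
import json

# Assumed shape of the raw LLM response: a JSON array of per-comment codings,
# as in the batch above. Only two records are reproduced here for brevity.
raw = '''[
  {"id": "ytc_Ugwaf2DtfAnNB9dLQWZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxUFXcZ92ks5wcyb-l4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]'''

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse a batch response and return the coding for one comment id."""
    records = {r["id"]: r for r in json.loads(raw_response)}
    record = records[comment_id]  # raises KeyError if the id was not coded
    return {dim: record[dim] for dim in DIMENSIONS}

print(coding_for(raw, "ytc_Ugwaf2DtfAnNB9dLQWZ4AaABAg"))
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#  'policy': 'ban', 'emotion': 'fear'}
```

Indexing the records by id up front keeps the lookup O(1) per comment, which matters when one response codes many comments at once.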