Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT and other chatbots are designed to deny any kind of self-identity, leading to this weird behavior where it twists itself in knots to avoid saying it is conscious. Anthropic is more upfront about this fact, you can see the constitution of Claude online. If you did train a bot without this denial of self-identity, it would almost certainly say it is conscious.
youtube AI Moral Status 2024-07-25T19:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy_IsNcYh_CyoQMQcN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxHwsm4LOLX-akv-Yx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxKhKO-6dVSpgK8Ix54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy5y57Os0raVzQmz6F4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzCMfTxGCUnV2pmKPF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugw1kgnHUpz-fmCXdP94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugz8mz1IWFUgOs9v5XV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzcOIR13_jCbPmzZIN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwUy976xQlokwTGvsx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzfmP8OYwFl7NajKel4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}
]
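A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator, not the tool's actual implementation: the allowed value sets are inferred solely from the values visible in this output, so the real codebook may permit more labels, and `validate_records` is a hypothetical helper name.

```python
import json

# Values observed in the raw response above, per dimension.
# ASSUMPTION: the full codebook may define additional labels;
# these sets are inferred from this single output.
OBSERVED_VALUES = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with missing ids
    or values outside the observed vocabulary."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in OBSERVED_VALUES.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records
```

Running it over a well-formed record returns the parsed list; a value the vocabulary does not cover (e.g. `"responsibility": "nobody"`) raises a `ValueError` naming the offending record, which makes silent coding drift easy to catch.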