Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Obviously the answer to all this is 42. The problem is we don't know how to properly instruct it. Too few directives sends it off in its own direction. Too many puts it in a conflict. That is where we need to be focusing our study on not worrying about an AI becoming evil or malevolent.
youtube · AI Governance · 2026-03-30T15:3…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | consequentialist
Policy         | regulate
Emotion        | approval
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxNngpzvlmoRsOWrnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwLMkER-lkYjrrp7yF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxNG4cAC3mORElpNj54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwED9KricIAGGO1qDp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugzn4WCcvlKv4jpjab54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwmEyWsObVRtGMM0bx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzlccIz6vK7aFrNf3J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxjdl0NlaaLlMwkfV14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzIVvK1exYgvem-Vpt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx6PQjYL3RWe12Qex94AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]