Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If the engineer is kind, give the AI the tools to shape the things and behaviors that are allowed to do, when allowed to live and function in society. If there is an idea that causes violating behaviors or actions, which are outside of what is done, the protection software in the AI will recognize itself and then activate it to reduce the energy in the AI or can let the AI run out of energy and go into sleep
Source: youtube · AI Governance · 2025-08-02T04:5… · ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugxfy2Y1wvOTwp4HEzh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyyvp-9AdLnR3IK3Zl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGhyFZDkyH2KVukM94AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzjLsKym6OCjk0SKjh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxlU8QWQGGnyhN44-l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwsp6-cA56I9Xa2M9N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2SQ-wm0GB_upZIgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyWKjN8gPS3C0M_BYR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzdoljG1eNN9jWLVGB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwh67naOc2ygDRVjwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
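A response like the one above can be parsed and sanity-checked before the labels are stored. The sketch below is a minimal illustration, not the tool's actual pipeline: the `RAW` string is a two-record excerpt of the batch shown here, and the `ALLOWED` label sets are inferred only from the values that appear in this batch (the real codebook may define more categories).

```python
import json

# Two-record excerpt of the raw LLM response batch shown above
# (the full array in this example has ten records).
RAW = """[
  {"id":"ytc_Ugxfy2Y1wvOTwp4HEzh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxlU8QWQGGnyhN44-l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]"""

# Allowed labels per dimension, inferred from this batch only
# (assumption: the full codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject malformed or off-codebook records."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this dataset carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

records = parse_batch(RAW)
print(len(records), "records validated")
```

Validating against a fixed label set catches the most common LLM coding failure, an out-of-vocabulary label, before it silently enters the coded dataset.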