Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Again, your example of an AI misinterpreting a request to protect your children is not compatible with the idea that it is super intelligent. If you told a human to protect your children and they went out and destroyed all other forms of life, you would consider them to be a complete idiot. It is obvious why that would be bad, but also why it would actually hurt the children not protect them.
Source: youtube · AI Governance · 2023-03-30T17:1…
Coding Result
Dimension        Value
---------        -----
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzwD6Wp3FGOC1k42hd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzqhf7uWSU6IkmJPpd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZyhZBunwvN2zJj1l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx5v1KyCad7gOsEuT14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2CXobofVqco1aikV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxWyKVgkLVDREXz0jh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyRAcP3-osLT2MLvS54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxXb5eT79mwO21SuxN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxkn3L6S39tMYDAlDN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzm87-lGrqI37qAZ8V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
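A raw response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal Python example, assuming the per-dimension value sets that appear in this sample (the full codebook may define more categories); the names `ALLOWED` and `validate_batch` are illustrative, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample response above.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "user", "developer"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "resignation", "indifference", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Every record must carry the YouTube comment id it codes.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"missing or malformed comment id: {rec!r}")
        # Each coded dimension must use a value the codebook allows.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim} value {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugx5v1KyCad7gOsEuT14AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
records = validate_batch(raw)
print(records[0]["reasoning"])  # deontological
```

Failing loudly on an out-of-vocabulary value catches the common failure mode where the model invents a label outside the coding scheme.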