Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
About your final remark, you said you think intelligence is a difference of kind rather than quantity, and I kind of agree. I just don't think AI _needs_ to be "super intelligent" in many fields at once to do irreparable damage to humanity. Right now these bots are nowhere near as intelligent as a smart human in any field, and they are still doing immense damage (although at this point most of this damage is directly tied to human greed). I absolutely believe that if a super intelligence takes over the world and decimates humanity, it will be very stupid. Not that a smart model wouldn't destroy humanity, but a dumb model that is capable will come first.
youtube AI Moral Status 2025-10-31T17:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyB3qDfe7Pekbn4lkx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw__6VrvVrvfyQd5_F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgwZjAhzsZKAu_WNT4F4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwNkWa1Pql9-S1tVz14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw1sIaLBBU5DXk6ibV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgxUNuN878cbERbK3f54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzWpYbJcsAc93iBj1V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugykc_Uk3JDO-6cjlGF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwN7UuLaDPFxKmxux54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyUoj4XF68gseUsoqZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
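A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical example: the `ALLOWED` code sets are inferred only from the values visible in this sample, and the real codebook may define more categories.

```python
import json

# Allowed codes per dimension, inferred from the sample response above
# (hypothetical: the actual codebook may include additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Example with the first record from the response above.
raw = ('[{"id":"ytc_UgyB3qDfe7Pekbn4lkx4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
records = parse_coding(raw)
print(len(records))  # → 1
```

Validating against a fixed code set catches the common failure mode where the model invents an off-schema label; rejected batches can then be re-prompted rather than silently stored.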