Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You don’t need the computer to be superintelligent to be dangerous. It doesn’t need to be self-aware or anything. In fact, I think a “dumber” AI that is programmed to accomplish its tasks regardless of the consequences without being able to understand the impact of its actions is even worse. And that’s what we are capable of building right now.
Source: YouTube — "AI Moral Status" — 2025-10-30T19:4… — ♥ 26
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzlNS7h6F8yzYvzSyJ4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "indifference"},
  {"id": "ytc_UgzRtPT5FtYtVnhQMr54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgzlLChLDho2DZmM6hJ4AaABAg", "responsibility": "unclear",   "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgxJ11a3gGSNO0nYdlx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwV58WTvHOgo-2Fg254AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgyB2Y4vkhlSl-Jzq5h4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugy03CnsV188SKGRnIp4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgzbtSwOPU84WWS6Txd4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxKjykBOQ2trpS-78l4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgzAT3zD70G2CGdh6hh4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"}
]
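To inspect a raw response like the one above programmatically, it can be parsed as a JSON array and the record for a given comment id looked up and sanity-checked. The sketch below is an illustration, not part of the tool itself: the `ALLOWED` sets are inferred only from the values visible in this response, and `lookup` is a hypothetical helper.

```python
import json

# Excerpt of a raw LLM response in the same shape as above.
raw = """[
  {"id": "ytc_UgxJ11a3gGSNO0nYdlx4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# Assumed label sets, inferred from the values observed in this response;
# the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def lookup(codings, comment_id):
    """Return the coding record for a comment id, validating each dimension."""
    for row in codings:
        if row.get("id") == comment_id:
            for dim, allowed in ALLOWED.items():
                if row.get(dim) not in allowed:
                    raise ValueError(f"unexpected {dim!r} value: {row.get(dim)!r}")
            return row
    raise KeyError(comment_id)

codings = json.loads(raw)
coding = lookup(codings, "ytc_UgxJ11a3gGSNO0nYdlx4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer fear
```

The id-based lookup mirrors how the coding result shown above (developer / consequentialist / regulate / fear) pairs with one entry in the batch response.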