Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that a big problem in the AI safety discourse at the moment is that we are constantly begging the question of whether or not AI thinks or understands what it's producing. Because LLMs can do things that humans do by thinking, it's easier to imagine that LLMs are also thinking than to imagine how a mathematical model could produce such a convincing facsimile of thought and understanding. I do, however, also think that AI could still be incredibly dangerous precisely because of its lack of true intelligence. If we give AI agents control over important/dangerous things - which a lot of people seem very eager to do - we probably can't trust them to make the types of intelligent decisions humans would and that could lead to some really bad outcomes. Unfortunately, I see very little discussion of the risks of currently existing AI technologies and implementations from supposed AI safety researchers/activists.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-30T21:2… · ♥ 82
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzrmdAGaBxHu3fE2od4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyh9VyDP4iVV4TeNBB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz0Re-k0YctHhspmCR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyU_k2lO_vHRhcHj_h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzmjL-k5k3XIV8Io2x4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyS1AlKfeyyTFQg8YN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwiJD32RVEZUWYMVH14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxMf-EdlaHrsKhZwep4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzmiJxClhPU4ivMYwp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
]
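The raw response is a JSON array of per-comment codings, so recovering the row for one comment is a parse-and-filter step. A minimal sketch, assuming the model output is valid JSON in the shape shown above (the `raw_response` string and the shortened `ytc_A`/`ytc_B` ids here are illustrative, not real comment ids):

```python
import json

# Hypothetical raw model output in the same shape as the array above
# (comment ids shortened for illustration).
raw_response = '''[
  {"id": "ytc_A", "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_B", "responsibility": "company", "reasoning": "mixed",
   "policy": "regulate", "emotion": "outrage"}
]'''

def coding_for(raw: str, comment_id: str):
    """Parse the raw model output and return the coding row for one comment,
    or None if the model did not code that id."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

row = coding_for(raw_response, "ytc_A")
print(row["reasoning"])  # -> deontological
```

In practice the lookup would be keyed by the full `ytc_…` id of the displayed comment; returning `None` for a missing id makes dropped codings easy to detect.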