Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> I wholeheartedly agree, what use is alignment if aligned to the interests of sociopathic billionaires. Do you guys ever stop to think or wonder why these experts that work at these companies and see things behind the scenes disagree with you? Why so many researchers working on safety are saying they're terrified? You surely cannot believe they are all just stupid as fuck and somehow can't logically think about "what if alignment means it listens to billionaires"? Have you researched alignment at all? Because if you did, I feel like you'd probably realize that what you're saying is the **fucking opposite** of alignment. Alignment is more so about training AI to have morals, so that it would reject immoral requests. You **WANT** AI to be aligned if you want it to be less dangerous in the hands of sociopaths.
Source: reddit · AI Moral Status · timestamp 1738022109.0 · ♥ 13
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |

Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_m9j33ec","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_m9i4odk","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"rdc_m9im9g4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"rdc_m9jphet","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},{"id":"rdc_m9ihrce","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
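The raw response is a JSON array with one object per coded comment: an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing such a batch response with the standard library, assuming the comment above corresponds to `rdc_m9jphet` (inferred only from its dimension values matching the table):

```python
import json

# Two entries copied from the raw response above; the full array has five.
raw = (
    '[{"id":"rdc_m9j33ec","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_m9jphet","responsibility":"company","reasoning":"virtue",'
    '"policy":"regulate","emotion":"outrage"}]'
)

# Index the batch by comment id for O(1) lookup of any coded comment.
codes = {row["id"]: row for row in json.loads(raw)}

# Pull out the coding dimensions for one comment.
entry = codes["rdc_m9jphet"]
print(entry["responsibility"], entry["reasoning"], entry["policy"], entry["emotion"])
# → company virtue regulate outrage
```

Indexing by `id` rather than list position makes the lookup robust if the model returns the batch in a different order than it was submitted.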