Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You have absolutely no way of knowing that. Let's say we dump $1 trillion into interpretability research tomorrow. Are you telling me that you know, for sure, that won't result in a good alignment outcome? Or what if we install a licensing regiment which requires companies producing cutting edge LLMs to do their own alignment research in order to be legally allowed to publicly release access to their LLMs, as Sam Altman is advocating for? Do you know for sure that this won't effect alignment outcomes? No one knows for sure, because if we did then we'd already have the results of the research. Stop pretending like you know.
Source: reddit · Category: AI Moral Status · Posted: 1685630024.0 (2023-06-01 UTC) · ♥ 2
Coding Result
Dimension        Value
Responsibility   company
Reasoning        unclear
Policy           regulate
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
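For reference, a minimal sketch of the record this table represents, assuming a Python pipeline. The class name CodingResult is hypothetical, and the example values noted for each dimension are only those visible on this page, not necessarily the full label sets.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CodingResult:
        # Four coding dimensions; example values are those seen in this sample only.
        responsibility: str  # e.g. "none", "company"
        reasoning: str       # e.g. "consequentialist", "mixed", "unclear"
        policy: str          # e.g. "none", "regulate"
        emotion: str         # e.g. "fear", "approval", "resignation", "mixed"
        coded_at: datetime   # timestamp of the coding run

    result = CodingResult(
        responsibility="company",
        reasoning="unclear",
        policy="regulate",
        emotion="mixed",
        coded_at=datetime.fromisoformat("2026-04-25T08:33:43.502452"),
    )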
Raw LLM Response
[ {"id":"rdc_jmg61cj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_jmhlqd9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_jmfwxpl","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"rdc_jmhboqz","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"rdc_jmfqo2q","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"} ]