Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgxRSIfgN…: "there is no such thing as aI .... a computer cannot program itself.... end of …"
- ytc_Ugx4vmQ78…: "The question is: "Will anybody be so eager to be driven by car?" "Will be anybod…"
- ytc_UgzU4V5fV…: "They're gonna cry when they get replaced by AI too lmao. I shouldn't care about …"
- ytc_UgyaUucS5…: "Dont worry, we will all be rendered useless, including all the precious c-suite …"
- ytc_UgwX3GFZy…: "This a great example of how dangerous AI is... gathering 'thoughts & emotions'! …"
- ytc_Ugx4Dco6u…: "AI can’t never be as smart as man, it’s a another great achievement for science …"
- ytc_UgyVDnCf_…: "I hate tesla, I hate electric cars, i hate automatic cars,....it's always a manu…"
- ytc_Ugw_6dFPZ…: "I only ask this as a normal question, what if it was the other way around? would…"
Comment
You have absolutely no way of knowing that. Let's say we dump $1 trillion into interpretability research tomorrow. Are you telling me that you know, for sure, that won't result in a good alignment outcome?
Or what if we install a licensing regiment which requires companies producing cutting edge LLMs to do their own alignment research in order to be legally allowed to publicly release access to their LLMs, as Sam Altman is advocating for? Do you know for sure that this won't effect alignment outcomes?
No one knows for sure, because if we did then we'd already have the results of the research. Stop pretending like you know.
Source: reddit · AI Moral Status · 1685630024.0 (2023-06-01 UTC) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | unclear |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_jmg61cj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_jmhlqd9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_jmfwxpl","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"rdc_jmhboqz","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"rdc_jmfqo2q","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```