Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
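A minimal lookup sketch in Python, assuming the coded results were exported to a JSON file keyed by comment ID; the file name (`coded_comments.json`) and the record fields (`comment`, `coding`, `raw_response`) are illustrative assumptions, not the project's actual storage layout:

```python
import json

CODED_PATH = "coded_comments.json"  # assumed export: {comment_id: record}

def lookup_raw_response(comment_id: str) -> str:
    """Return the exact model output recorded for one coded comment."""
    with open(CODED_PATH, encoding="utf-8") as f:
        coded = json.load(f)
    record = coded[comment_id]  # KeyError if this ID was never coded
    return record["raw_response"]

# IDs appear to carry a source prefix, e.g. "ytc_" (YouTube) or "rdc_" (Reddit).
print(lookup_raw_response("rdc_m9jphet"))
```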
Random samples
- Main difference between AI and human / AI:- I have plenty to learn / Human:- I kno… (`ytc_UgwMoVTG8…`)
- Will AI pay taxes? Because the wealthy won’t so who will pay taxes if no one but… (`ytc_UgxAHPVuD…`)
- I would take a different approach. I would give them a section, tell everyone i… (`ytc_UgzXkIduu…`)
- We heard all the "future will be better!" routine with EMRs. EMRs would make doc… (`ytc_UgyGKH_YE…`)
- Ethics wise its very questionable - but the big data , data sets and AI progress… (`ytc_UgxY8t1zO…`)
- I studied cognitive psychology and I'm so glad somebody is finally making these… (`ytc_UgxzbNWcX…`)
- I'm genuinely so angry at all the AI shite. I even used it a bit at the beginnin… (`ytc_UgxtRIHfJ…`)
- AI is replacing not only blue collar workers but as well administrative workers… (`ytc_Ugwo3gIcN…`)
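The sample list above is a random draw from the coded corpus. A sketch of how such a draw could be produced, reusing the hypothetical `coded_comments.json` store from the lookup sketch:

```python
import json
import random

with open("coded_comments.json", encoding="utf-8") as f:
    coded = json.load(f)

# Pick eight coded comments at random and print a truncated preview,
# mirroring what the sample list shows.
for comment_id in random.sample(sorted(coded), k=8):
    preview = coded[comment_id]["comment"][:80]
    print(f"{preview}… ({comment_id})")
```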
Comment
> I wholeheartedly agree, what use is alignment if aligned to the interests of sociopathic billionaires.
Do you guys ever stop to think or wonder why these experts that work at these companies and see things behind the scenes disagree with you? Why so many researchers working on safety are saying they're terrified? You surely cannot believe they are all just stupid as fuck and somehow can't logically think about "what if alignment means it listens to billionaires"?
Have you researched alignment at all? Because if you did, I feel like you'd probably realize that what you're saying is the **fucking opposite** of alignment. Alignment is more so about training AI to have morals, so that it would reject immoral requests. You **WANT** AI to be aligned if you want it to be less dangerous in the hands of sociopaths.
Source: reddit · Topic: AI Moral Status · Posted: 2025-01-27 (Unix timestamp 1738022109) · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
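Each coded comment gets one label per dimension. A minimal validation sketch, where the allowed label sets are inferred only from the values visible on this page (the full codebook presumably defines more):

```python
# Label sets below are just the values seen in this sample (assumed subsets).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage"},
}

def validate_coding(item: dict) -> list[str]:
    """Return a list of problems with one coded item (empty means valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = item.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} is not a known label")
    return problems
```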
Raw LLM Response
[{"id":"rdc_m9j33ec","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_m9i4odk","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"rdc_m9im9g4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"rdc_m9jphet","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},{"id":"rdc_m9ihrce","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]