Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
And I think what I can do is inovate those which or may changing them with repla…
ytc_UgwjtZxu4…
Bloomberg is laying off due to AI as well.
Not a Musk fan...but everyone using A…
ytc_UgyvhcX4G…
AI needs anthropomorphic robots to take over. James Cameron nailed it with the "…
ytc_UgzvSXAMf…
Yeah, because you can’t AI just anything.
Good fucking luck having a regulator…
rdc_n9mj34w
The problem is that for many people around the world, protecting endangered anim…
rdc_deuf3bm
today's ai are not intelligence models at 20% of human level but only automation…
ytc_UgzKLMVot…
I have a low opinion of chatGPT after using it for a while, but it is great for …
ytc_UgwvAIanq…
Disabled artist here, saying that Ai art is “good for us” has got to be the most…
ytc_UgwyVC6RV…
Comment
Serious answer to this question: the emotional depth that an LLM can connect with people is astounding. That makes it primed for abuse. Think of the misinformation and manipulation that goes on with advertising, social media campaigns, subtle slants in newscasts to get people to act against their own self interest. This can amplify that a thousand times. Nudging the weights of a model through selective training can and will have real societal effects. Now I don't *think* that's happening yet, but who knows. But without some kind of regulation around transparency of training and a population that is intentionally training to watch for cognitive leading (LLMs do this by design, but can also suggest ways to spot it and manage it) and amplifying biases, we may go down a very troubling road.
Source: reddit · Thread: AI Moral Status · Posted: 1743842859 (2025-04-05 UTC) · ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_mlig3f9","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_mlihpze","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_mlisduj","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"rdc_mli2bj0","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_mlhsvtx","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
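The raw response is a JSON array with one object per coded comment, covering the four dimensions shown in the table above. A minimal sketch in Python of how such output could be parsed and indexed by comment ID (the field names come from the visible records; the validation rule of requiring all four dimensions is an assumption, not a documented schema):

```python
import json

# Raw model output as shown above: one JSON object per coded comment.
RAW = """[
 {"id":"rdc_mlig3f9","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_mlihpze","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_mlisduj","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"rdc_mli2bj0","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_mlhsvtx","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions visible in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model output and index records by comment ID,
    skipping any record that lacks one of the four dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

codes = index_codes(RAW)
print(codes["rdc_mlihpze"])  # the record matching the coding result above
```

Looking up `rdc_mlihpze` returns the same values the result table displays (responsibility: distributed, reasoning: consequentialist, policy: regulate, emotion: fear), which is how a "look up by comment ID" view could be backed by the raw response.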