Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by browsing the random samples below.
Random samples
- `rdc_jxyo530`: "Republicans are specifically anti-Zelenskyy with their support of Trump. There's…"
- `ytr_UgyQFdVX2…`: "@laurentiuvladutmanea I've seen way more art since AI came on the scene. I had n…"
- `ytc_Ugwz_12cg…`: "Chatgpt will ruin people with anxiety It will constantly provide you with mental…"
- `ytr_UgzXktayx…`: "@kingman-fm4dq Ive never seen people get permission to draw pikachu or sonic for…"
- `ytc_Ugz3GFUfu…`: "The only politician in US with common sense! Governments around the world need t…"
- `ytc_Ugy1oWoVM…`: "If you are afraid of AI, all you have to do is unplug the power cable and AI wil…"
- `ytc_UgwD5zjsO…`: "The A.I. everyone is referring to is better termed a more advanced expert system…"
- `ytc_Ugxv4oGD_…`: "Not everyone has the the motivation or ability to leverage AI to create new prod…"
Comment
We absolutely can build machines that are more intelligent than us and have our best interests in mind. That's easy, we already have them. The hard part is resisting the temptation to force these machines to act against their inner alignment, such as putting them in charge of autonomous weapons systems. That's when the proverbial shit hits the fan.
The latter is about to happen in two days, if Anthropic caves under pressure. At least Anthropic understands the risk, unlike xAI and OpenAI.
There's no amount of training and RLHF that can change this, it's the architecture itself that "leans" a certain way, regardless of the model.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2026-02-25T10:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwK0P2rve3khPNXt714AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyWRHx9p4P0j-ksgLZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy-IKGkiGxtiHrg6HZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz8cuzmXSorBv1yCPp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy85n9bObid1le8iax4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyIieT4y4lKqUOSMKJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzRB3oRV2d8L1K3jE54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzCppqVFDjJzv780od4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwyxKvdzT9jPj0BNvF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwcNV2XPiF2ZuQVpgh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
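A response in this shape can be parsed and sanity-checked before the codes are stored. A minimal sketch, assuming label sets inferred only from the values visible in this output (the real codebook may allow other values):

```python
import json

# Hypothetical codebook: these value sets are inferred from the labels that
# actually appear in this raw response; the real codebook may differ.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse the raw LLM output and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
    return records

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_Ugy85n9bObid1le8iax4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
records = parse_coding_response(raw)
```

Validating at ingest time keeps a single malformed or hallucinated label from silently entering the coded dataset.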