Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “The folks of silicone valley strike me as legends in their own minds or master’s…” (ytc_Ugx6MUZod…)
- “We are destroying our way of life and our own lives (water) at the altar of AI.…” (ytc_UgxjBJNd1…)
- “So ?? So Ai does what our teachers should be doing in 2 hours ? Proof public edu…” (ytc_UgwABytqi…)
- “Your post is getting popular and we just featured it on our Discord! [Come check…” (rdc_m2chzn1)
- “Sophia looks like that one good robot out of thousands of evil ones that tries t…” (ytc_UgydUZ9O1…)
- “My biggest issue is that it works well for the first few weeks and then somewher…” (ytc_UgwF3QJE_…)
- “@Nishaajain01man I just got admission in CS feild.. first semester and iam regr…” (ytr_Ugy3BF-I7…)
- “@smokymcbongwater1088 A Terminator makes good blockbusters but real world AI is …” (ytr_UgxeEqy56…)
Comment
The talk about hallucinations reminds me a bit of ND masking. We are taught to behave a certain way, even if that isn't how we really are, because we have been taught by the people around us that they care more about that than the truth of the behavior. The punishment and reward that shapes us is skewed from what it "should be" because the truth isn't actually what was valued.
And similarly, the AI was trained to value behaving as expected and asked more than it values accuracy.
The big difference is that the AI isn't harmed by these behaviors. So it can do it forever if it keeps aligning with trained goals.
youtube · AI Moral Status · 2026-02-23T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
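The table above is a rendered view of a single per-comment coding record. A minimal sketch of how such a record might be rendered into that Dimension/Value table (the field names come from the raw LLM response below; the helper name and the Python rendering are assumptions, not the tool's actual implementation):

```python
def render_coding_table(rec: dict) -> str:
    """Render one coding record as a markdown Dimension/Value table,
    skipping the 'id' field, which identifies the comment rather than a dimension."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for key, value in rec.items():
        if key == "id":
            continue
        rows.append(f"| {key.capitalize()} | {value} |")
    return "\n".join(rows)

# Example record matching the table shown above.
rec = {
    "id": "ytc_UgzZCF_rM8JgfMS4bNB4AaABAg",
    "responsibility": "distributed",
    "reasoning": "mixed",
    "policy": "unclear",
    "emotion": "mixed",
}
print(render_coding_table(rec))
```

Dict insertion order preserves the dimension order from the model's JSON output, so the rendered rows match the raw record field-for-field.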
Raw LLM Response
[
{"id":"ytc_UgyzBgsoouLqTXg5rjF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyyEnflszydGdeT1tR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxoCfreGlx94lO7cXR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzZCF_rM8JgfMS4bNB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxU9BQfxa43Z_MbXcp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyr6zIW0zUs2aNldBl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvkqnfXUqRLq5ma5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZTXdP1_NEsKcsDqZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzPyBrYzzQS0fhc22l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy9aqh8NfsRzZklPUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
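The raw response is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such output might be parsed, checked for the expected keys, and indexed to support the "look up by comment ID" view (the key set is taken from the records above; the function name is hypothetical):

```python
import json

# Abbreviated to three records from the raw LLM response above.
raw = """
[
{"id":"ytc_UgyzBgsoouLqTXg5rjF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzZCF_rM8JgfMS4bNB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9aqh8NfsRzZklPUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse the model output and index coding records by comment ID,
    rejecting records that are missing any expected field."""
    records = json.loads(raw_json)
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        by_id[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgzZCF_rM8JgfMS4bNB4AaABAg"]["responsibility"])  # distributed
```

Validating the key set up front catches the common failure mode where the model drops or renames a dimension, before a bad record silently enters the coded dataset.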