Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
AI is for control nothing else.
My guess on the 5 jobs left, 1. billionaire, 2.…
ytc_UgxqQcO00…
Hey Preet - I dont think a AI safeword is sufficent to protect us sufficiently.…
ytc_UgzBPywU4…
Robot being a professional driver. Don't give a crap what's going on outside. Ne…
ytc_UgwNfqAzf…
What if AI jukes its own abilities to cause a worldwide financial collapse, wipe…
ytc_UgwsxDTHu…
I don't disagree with your here, but I have to ask as it comes up and want a sat…
ytc_UgyskyEMf…
As a Dev, and a indie developer (making myself some pixelart + promotional image…
ytc_Ugxj3qrz5…
Most comments here suggest Yudkowsky has won in this debate. I, however, see num…
ytc_Ugx7QO9ap…
Much like capitalism, social media is a zero-sum game (I'm probably using the an…
rdc_m5qygpq
Comment
16:09 not exactly. That's also part of it, but the important point is that saying "I don't know" isn't a "good answer" that's useful to us as users. The LLM does not know if it knows things. That is the problem. So it has no way to say internally, ah I'm not confident here so I should be less overt in phrasing, or vice versa, ah there's tons of evidence here so I'll be very clear. It just roleplays as though it always has the answer, because a machine that has all the answers, IS WHAT WE'RE ASKING IT TO BE. If you try to train an LLM to be less certain, it's just going to go "uhh idk" for EVERY ANSWER, because that's the only answer that would be true. And early versions of GPT, particularly 3.5, basically did that. "As an AI language model..."
Otherwise, it will be "uncertain" when the common presentation of an issue is one of uncertainty, or when the previous context inspires it to think that phrasing is likely. And it will do that even when the answer is super obvious and it definitely has that answer in the training set.
It has no way to tell itself when to do it one way or the other. It only knows that we want it to sound like it's saying things that are true.
youtube
AI Moral Status
2026-01-08T16:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxXvP06xB_rvHXU8nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxB2lUMC10V2WCKMdh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzG6m5nNk-ZQp4yPdd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdLgUpm0zqRww_36x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXKB0Q9EOyb0TYAQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz9jWegCqJ5MLH9GXF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjHKweqa7s6ZC0JHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-KZ4-7G2BKQOny894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy88yz9_C5B-z5vALJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-G5YAEcxVcUZLiZt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
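A raw response like the one above can be checked mechanically before the coded rows are stored. A minimal sketch in Python; note that the `ALLOWED` vocabularies here are inferred only from the labels visible in this sample output, and the real codebook may permit additional values:

```python
import json

# Allowed labels per dimension, inferred from the sample response above.
# Assumption: the actual codebook may include labels not seen in this sample.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose labels are in-vocabulary."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # A row passes only if every dimension is present with an allowed label.
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

# Usage: a well-formed row passes, an out-of-vocabulary row is dropped.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"mixed"}]')
print(len(parse_coding(raw)))  # → 1
```

Filtering rather than raising keeps a single malformed row from discarding an entire batch; dropped IDs could be logged and re-queued for recoding.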