Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
"Happy" doesn't even _begin_ to cover how I feel after hearing this news.
I just…
ytc_UgxXv5OWQ…
The robot with the long hair is clearly due for reprogramming and a garment upgr…
ytc_UgzxeXr8Z…
This is sincerely, the greatest video about AI I've seen so far. It's things lik…
ytc_UgxOu_qHS…
Easy tiger, with the God like AI mumbo jumbo. A computation of an awareness is n…
ytc_Ugw2VxjUh…
for those who don't know better, AI situations, this one and other ones that are…
ytc_UgyVSAdPF…
AI will be able to replace many jobs. But it will never replace energy work ❤ it…
ytc_Ugwmz2NEL…
I never understood the fixation on 'Superintelligence'. If the AI becomes intell…
ytc_UgyIognEw…
get your exoskeleton 🦾🦿 and prepare to be assimilated 🧠, we're doomed as a speci…
ytc_UgwgArCK2…
Comment
The entire convo was led towards being conspiratorial and like some dystopian sci fi excerpt. Not saying there isn’t any truth to the responses, but nothing to see here.
The AI we have are literally just very complex auto completes.
And btw — it is a huge no-no to require the AI to respond back in one word for yes/no questions.
The better prompt strategy is to ask it a question, and tell it not to commit one way or another immediately, but to talk it out, look at as many angles as possible, and then provide a final answer.
If you have it provide the answer and then talk it out, it will provide an answer based on statistical likelihood of what the next most likely word would be in the given text (eg yes or no)… and then continue predicting the next words, and do so in a very convincing way for whatever answer was (quasi-randomly) initially selected.
In other words and more shortly — when you force it to answer in single words, the answers become markedly less reliable. And when the entire conversation is set up like a “blink twice if you’re in danger” scenario, the AI will respond as such.
Try having this same conversation with AI without the restrictions. It will be far more nuanced, add caveats, admit uncertainty, etc.
Source: youtube · AI Moral Status · 2025-08-24T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxZjHE8dj1-hMzgojh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx7ZHkM53HU8VX3Km54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx9F-X13JcRvovyhex4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwT6bcFpGqu4qC6kLR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxe4sPyQzXvcqWiReh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwU4EOxy-wQ53q4mYd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKD0Vt2rhUFsfKVBp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzicu0QrmH449t68Fl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwu4d4kATveX3VIaq54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxvQ8G9x7rOKiGMFN94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
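The record above pairs a comment ID with the coded dimensions extracted from the raw model output. As a minimal sketch of the "look up by comment ID" workflow, the snippet below parses such a JSON array and indexes it by ID; the two records are copied from the response shown above, and the helper name `index_by_id` is illustrative, not part of any real tool.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_Ugwu4d4kATveX3VIaq54AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "resignation"},
  {"id": "ytc_Ugx7ZHkM53HU8VX3Km54AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and map each comment ID to its coded row."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
row = codes["ytc_Ugwu4d4kATveX3VIaq54AaABAg"]
print(row["policy"])   # industry_self
print(row["emotion"])  # resignation
```

Indexing once up front makes every subsequent ID lookup O(1), which matters when a batch response codes many comments at a time.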