Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI will replace all humans. Humans will become extinct one day. But that day is … (ytr_UgxvQWq47…)
- Yep, already talking about bad things done in the us and Canada. Difference is, … (rdc_h5tyy7y)
- That name just rolls off the tongue. Didn't think it was possible to create a wo… (rdc_jfto8i9)
- oh nooooo technology replacing peoples jobs!!! that never happened every again l… (ytc_Ugw5IQlR4…)
- "Can you make it more subtle?" This is what Chatgpt wrote: Certainly! Here's a… (ytc_UgwfHq1b4…)
- 00:29 What was his degree in, something dumb like "journalism major" or "busines… (ytc_UgyjA3BeJ…)
- No doubt about that it can help with certain jobs. The main question is whether … (ytr_Ugx5y1xm7…)
- Ai cuts costs, does the work but these compamies prices will still increase to k… (ytc_Ugyga7fhN…)
Comment
This guy has been doing fearmongering about AI for years now. LLMs often not follow instructions accurately, not because they are "self-aware" and "want to disobey at our expense". Most experts agree that LLMs, mere neural networks, can't do those things. Most likely it is because certain instructions get displaced out of their context windows after a long conversation, or because the instructions are misleading and they start playing the role of a "sneaky" agent (e g. After some jailbreak prompt).
Ultimately, if this was legit and as worrisome as his smug face is insinuating, he could just provide proof. Give the setup, the prompts, and copy of the conversation.
youtube · AI Moral Status · 2025-06-04T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxyftFdJiG-Wtb-Uyl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwhyXYdZmIkyA4n3kR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugyi7aotmTeW0hGbjFJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwaentiQjN-zkwW6nZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy8E7LoqMKAlvsv9a94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxp2O6OE7eg5EOQ5nV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugw54apVsj0EYfyaVXl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzhrihmzEGQ56AbH4d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyl3AIaLNpFZhAgKcl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzSo0aENwcAMC3AMg14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
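The raw response above is a JSON array of per-comment codes, one object per comment ID, with one value for each coding dimension. A minimal Python sketch of parsing such a response and filtering out malformed records — note the dimension vocabularies below are inferred from the values visible on this page, not from any published codebook, and the sample ID is hypothetical:

```python
import json

# Allowed values per dimension, inferred from the codes shown on this
# page; the actual codebook may define additional categories.
VOCAB = {
    "responsibility": {"none", "developer", "government", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    all fall inside the known vocabularies."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in VOCAB.items())
    ]

# Hypothetical comment ID, used only to illustrate the record shape.
raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
print(len(parse_codes(raw)))  # prints 1
```

Dropping (rather than repairing) out-of-vocabulary records keeps the downstream coding table honest: a hallucinated label never silently becomes a count.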