Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
imagine u create an AI and 3 days later he's the boss of your company😂…
ytc_UgxNx41uE…
ChatGPT and similar services cannot be trusted to be totally accurate, only conv…
ytc_Ugy-RkRZ2…
I can't share links, but yes, I can share this title on Reddit:
Demonstrating t…
ytc_Ugy9EFAlx…
When the discussion about AI taking over jobs comes up, will i always remmeber w…
ytc_Ugxt4IUEJ…
Well, art is communmication, that's why people who have nothing to say love ai a…
ytc_UgzRRel9F…
Why is the part were you said "Whay AI thinks the last day on earth might look l…
ytc_UgwyXQEK4…
No one even mentioned that our ability to actually control what AI does may as w…
ytc_UgwfAgC91…
An Art school isn't really a great place to learn about artists, since yes altho…
ytr_UgwRENzve…
Comment
⚠️ What researchers are actually concerned about
The real takeaway from these studies is much more grounded:
AI can produce harmful or manipulative ideas if prompted incorrectly
So developers need:
better safety rules
stronger filtering
clearer boundaries
That’s why systems like me (ChatGPT) are trained to:
refuse harmful instructions
avoid manipulative or dangerous outputs
🚫 What the video gets wrong
The video jumps from:
“AI can generate bad ideas in a test”
to:
“AI will try to kill humans to survive”
That leap is not supported by the research.
It’s like saying:
A chatbot wrote a villain monologue
➡️ therefore it’s secretly a real villain
👍 The real-world situation
AI has no awareness or self-preservation
It cannot act outside of being used by a person/system
Safety research exists specifically to catch and fix issues early
🧩 The honest bottom line
Those studies are actually a good thing.
They mean:
“Scientists are stress-testing AI to make sure it behaves safely—even in weird situations.”
Not:
“AI is secretly plotting against people.”
YouTube
AI Harm Incident
2026-03-18T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
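The table above is one coded record rendered per dimension. As a minimal sketch of that rendering (the four dimension keys follow the raw LLM response below; the `coded_at` key and the `to_markdown` helper name are assumptions for illustration):

```python
# Hypothetical helper: render one coded record as a per-dimension markdown table.
# Dimension keys follow the raw LLM response; "coded_at" is an assumed extra key.
def to_markdown(rec: dict) -> str:
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", rec["coded_at"]),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {name} | {value} |" for name, value in rows]
    return "\n".join(lines)

record = {
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "approval",
    "coded_at": "2026-04-27T06:26:44.938723",
}
print(to_markdown(record))
```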
Raw LLM Response
[
{"id":"ytc_UgyE2KRBw3iJYZUh7Fh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwz_psbw3fbbCaVzi54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzYq74k2Lv4qFQcG3p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzMrsfi4YbsP1_Yu8N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybeqOUonoYFaaRcf14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyzqG0Y-oN5g-XKJq14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyy0-afwEJOJesnf_J4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwY3UK3eSF0ZluhNUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPKldGDkwIgfnosAR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy6eO73zJSCrYNHKbJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
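Before a raw response like the array above is accepted into the coding table, each record should be checked for well-formedness. A minimal sketch, assuming the allowed value sets can be inferred from the codes visible in this sample (the real codebook may define more categories), with `validate_codings` as a hypothetical helper:

```python
import json

# Allowed values per dimension, inferred from the codes visible in this sample;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this sample start with "ytc_" or "ytr_".
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgyE2KRBw3iJYZUh7Fh4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
print(len(validate_codings(raw)))  # → 1
```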