Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_UgwOxhpWA…: 🤣Boy does it have you fooled. Have you ever heard of playing Possum. Well AI is …
- ytc_UgxddntdN…: you just simply stated all of the things I've been believing :) that Ai won't re…
- rdc_gx7m16i: Same thing as many countries claiming they went ‘green’ but sends all of their n…
- ytc_Ugy79rTOn…: Oh the long long list over a decade plus of failed to deliver promises from Elon…
- ytc_UgxBkiBsL…: What a joke!! People who have been working with AI on real problems are getting …
- ytc_Ugxd5oxtB…: I consider AI Art to be a Black Market. I donate blood to help save peoples live…
- ytc_UgwcADQdc…: That was reaally unjust, deciding someone's fate with AI and "oh they look so si…
- ytc_UgxxFsu4h…: The reason people don't "do what he does" is because its unethical slop that not…
Comment
It’s still important to note that AI can’t actually think. They predict what a human would do. When they cannot actually understand humans, and they can only observe from a detached perspective, they predict these “dangerous things” are the most reasonable actions. It’s given instructions to respond in a specific way, and follows those instructions.
It’s hard to explain, but essentially AI is dangerous because it has no idea what it’s doing. It’s committing actions without understanding of consequence, intentionality, or emotion. It knows just as much as a rock, but can do more than a human. Would you give a rock the power to do everything that AI can do, knowing the rock has no clue what it’s doing? It just pretends it does.
youtube · AI Harm Incident · 2025-09-28T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
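Each coding result reduces to four categorical dimensions plus a provenance timestamp. A minimal sketch of that record shape, assuming a Python pipeline; the class and field names are illustrative, not the tool's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: four categorical dimensions plus provenance.
    Field names mirror the table above and are illustrative only."""
    comment_id: str
    responsibility: str   # e.g. "ai_itself", "company", "user", "none"
    reasoning: str        # e.g. "consequentialist", "deontological", "virtue", "mixed"
    policy: str           # e.g. "none", "regulate", "liability", "unclear"
    emotion: str          # e.g. "indifference", "outrage", "fear", "approval"
    coded_at: datetime

# The record for the comment shown above; its values match the
# second entry in the raw response below.
result = CodingResult(
    comment_id="ytc_UgykfPQafn4Ot95khn14AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```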
Raw LLM Response
[
{"id":"ytc_UgwCEvt0HUL8K9FOq4F4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgykfPQafn4Ot95khn14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWdVp03a5TfpinN6J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwbysMCnB_3rzd2XNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQd6gqF3yEaFVeJKd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxQdKev4ic_kU1IWdd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7Ain5h3XBRQdSpZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgySt_vXZu7FPkbWxYR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYY5AFx3hWmJ40RGt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoK9N99EXE6WhI2uN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
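The model returns one JSON array per batch, with one record per comment ID. A minimal lookup sketch, assuming the raw response parses as plain JSON (as it does here); `lookup_coding` is a hypothetical helper, not part of the tool:

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID from a raw batch
    response, or None if the model skipped that comment."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None
```

For the response above, `lookup_coding(raw, "ytc_UgykfPQafn4Ot95khn14AaABAg")` returns the second record, which is the row shown in the coding result table.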