Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “Bruh hope Elon gets Open ai unlike some idiots who cant fix the fucking chatgpt…” (ytc_UgwanJpQA…)
- “What an amazing interview.. It doesn’t matter who creates super AI or Super Inte…” (ytc_UgyqOyMtM…)
- “Self-driving (Supervised) means YOU are still responsible for what happens. Whet…” (ytc_Ugyq-hjlU…)
- “Anyone who is even slightly involved with AI knows that we will have a huge prob…” (ytr_Ugz5wtENy…)
- “So, one of the main problem with AI is, you can't really give access to an AI to…” (ytc_Ugwf6XTjA…)
- “She’s right but that’s how media work, they create celebrities, we will hear wha…” (rdc_fanu4be)
- “Shhhhh.... no violence, only love. /s To be fair and trying to stay away from c…” (rdc_f1z6xjx)
- “This is how to scare companies into spending a crap load of money on AI…” (ytc_UgzVbKBO0…)
Comment
The world of AI should scare everybody. For the first. Every new AI model is smarter than the previous model with factor 10, so it is not a small step for every development. But a significant one each time. Soon, if not already happened, the AI models prompt themselves. No human intervention is needed. We humans are clever, but not so clever that we really know what AI is doing. The only thing we developed was the learning algorithms for the AI. The rest is done by the AI itself. And we do not know how that's work. If we do not " wake up" of this first AI dazzling moment in time and see what AI really is and where its going. We are in for a rollercoaster ride that will be a true horror.
youtube · AI Harm Incident · 2026-01-01T18:2… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxC2P_CvZvMRxlIgKB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzOs7sXZtMB3Lro4BZ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzjjmvMzzcCFxydIyt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgykU_Y9WU5vmaX1X9J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxd35jMJFZSV818LtN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNqy8IPOApIDqHJRd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzA2KxLfAF6EqTYD3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzc9oORCPl0NkSSYEt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzx-IsBJLt7ZnFz-zd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyornDsxeBTlN00yLl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
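Each raw response is expected to be a JSON array of coded records with the four dimensions shown in the table above. A minimal validation sketch is below; the allowed code values are inferred only from the sample response on this page, so the full codebook may contain additional values (assumption), and `parse_response` is a hypothetical helper name, not part of the tool.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# The actual codebook may define more values (assumption).
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "resignation",
                "indifference", "unclear"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse one raw LLM response, rejecting malformed or off-schema rows."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        # Every row needs an id plus all four coding dimensions.
        missing = {"id", *ALLOWED} - row.keys()
        if missing:
            raise ValueError(f"row missing keys: {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row[dim]!r}")
    return rows

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
rows = parse_response(raw)
print(rows[0]["emotion"])  # fear
```

Validating every batch this way catches the common failure modes of structured LLM output (truncated JSON, invented labels, dropped fields) before the rows reach the coding table.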