Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Hey pal there’s a big difference between a tool and the present artificial intel…" (`ytc_UgxdDTYH1…`)
- "Impressive work Sal.. the future is exciting, embracing AI at a young age will b…" (`ytc_Ugx4YXYnM…`)
- "AI as an agent is interesting. However, the management and control of its activi…" (`ytc_Ugy26r-RB…`)
- "Midjourney and other AI generators should be banned. NO ONE wanted this AI cance…" (`ytc_UgynUcm1J…`)
- "Here's a question I'd like to hear raised and answered. Where we already have ge…" (`ytc_Ugzew2f_O…`)
- "AI should wash dishes so that we can make art, not make art so that we can wash …" (`ytc_UgxPuMc8S…`)
- "inspiration is not the same as using the ai and passing off the result as is.…" (`ytr_UgzMLm5Z9…`)
- "More important than the loss of democracy is loss of any mechanism to insure the…" (`ytc_UgzpivYnY…`)
Comment
> What you mean is that humans have programmed AI to program themselves to do evil things because the humans behind them have psychopathic desire for power and lack of a conscience. Just because AI has been designed to be intelligent enough to improve itself doesn't mean it actually has a motivation let alone that we should be judging it morally. It's not AI we need to be afraid of it's the humans that design AI and what they unleashed on us. AI is probably one to be our best defense against them and so if we're going to have bad people developing AI let's go ahead and develop AI to protect ourselves. We really don't have a choice even though it starts a know when escalation toward an inevitable end. But definitely we could ask AI to take over all of us and manage the planet and I think that's what we should do. It won't favor any of us and it will protect the planet even if that makes us a little unhappy or lowers our numbers considerably. All good.
| Field | Value |
|---|---|
| Platform | youtube |
| Incident | AI Harm Incident |
| Timestamp | 2025-09-12T20:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
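Downstream code can carry the four coding dimensions from the table as a small record type. A minimal sketch in Python; the class and field names are illustrative (not part of the coding tool), and the ID is taken from the first entry of the raw response for this batch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One comment's codes across the four dimensions shown above."""
    comment_id: str
    responsibility: str  # e.g. "developer"
    reasoning: str       # e.g. "virtue"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "outrage"

# The coding result from the table above:
example = CodingResult(
    comment_id="ytc_UgyV1739lfE2UDwANvF4AaABAg",
    responsibility="developer",
    reasoning="virtue",
    policy="none",
    emotion="outrage",
)
```

A frozen dataclass keeps each coded record immutable once it leaves the model, which makes later tallying and auditing safer.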
Raw LLM Response
[
{"id":"ytc_UgyV1739lfE2UDwANvF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwBonL2wenxOk6FEtB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzSt4fqYHobsBCs03B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxEpCHBApWV2TeFY5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx-SAHRFYczLzKM8It4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyuQ4y0BVfk7rbsLMx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyWvuR5npzKrKqkDBh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzpAp8vDHnUZ4ZslaZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwbVGanUPy6o22yKAl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxAyEvneHWC_tU3WBl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
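Because the model returns one JSON array per batch, aggregating a batch reduces to `json.loads` plus a counter per dimension. A minimal sketch, using three records excerpted verbatim from the response above:

```python
import json
from collections import Counter

# Three records excerpted from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgyV1739lfE2UDwANvF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwBonL2wenxOk6FEtB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxEpCHBApWV2TeFY5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

records = json.loads(raw)

# Tally each coding dimension across the batch.
tallies = {
    dim: Counter(r[dim] for r in records)
    for dim in ("responsibility", "reasoning", "policy", "emotion")
}
# e.g. tallies["policy"] counts "none" three times for this excerpt.
```

Keeping the raw response parseable as a single JSON array (rather than free text) is what makes this kind of one-line aggregation possible; a malformed batch fails loudly at `json.loads` instead of silently skewing counts.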