Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- ytc_Ugw2oEZP1…: "Leave anything if it is strong enough why wouldn't it be able to control the ser…"
- ytc_Ugwq4ztuA…: "ai engineers said ai work in a blackbox, they also did not really understand how…"
- ytc_UgyD1Aozy…: "what if like, the great filter is ai so like if you discover ai u are basically …"
- ytr_UgxqdIeqp…: "The way he is using the music falls under fair use, the only reason ai isn't get…"
- ytr_Ugz-gH9_l…: "I don't blame them. Despite all the shit talking, AI \"bros\" are right. Like it o…"
- ytc_UgyUtujQb…: "I remember getting a defensive response from AI one time. I first asked...Is it …"
- ytc_UgxnBo8_R…: "I actually ran their statement through chatgpt, out of curiosity, and it flagge…"
- ytc_Ugy173vjO…: "I have a question? Is it not chill to use a AI to give like word prompts for me …"
Comment

> People are out here saying giving a robot a gun is the start of the end, I’m out here saying creating robots in the first place is the start of the end

youtube · AI Harm Incident · 2024-01-05T22:4… · ♥ 222
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugy07hyGB0pJod__Md94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyzvq7GtdSMeWXd1xJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx5alMLIOPe5sAN3V94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwn5LwEsORtsedjd9J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzEmYc7AJ86YGzAgKN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyEQ-E6fJNRN5s54kp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHj6hv-qGE5sE3d614AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy0eGi6eyfGud8T5Bl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz34DIBmjm_14lY8ht4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx6mH1UC7UzStQQMxB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
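The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a batch could be parsed into a per-comment lookup and validated is shown below; the allowed category sets are inferred only from the values that appear in this output (the actual codebook may define more), and the function name `parse_codes` is illustrative, not part of the tool.

```python
import json

# Category sets inferred from the values seen in this batch;
# the real codebook may include additional labels.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse the model's JSON array into a dict keyed by comment ID,
    rejecting any value outside the expected category sets."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# One record from the batch above, for illustration:
raw = ('[{"id":"ytc_Ugwn5LwEsORtsedjd9J4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugwn5LwEsORtsedjd9J4AaABAg"]["policy"])  # ban
```

Validating against a fixed schema catches the common failure mode of LLM coders inventing labels outside the codebook, before the values reach the results table.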