Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgzTVDjio…` — "I would like to see AI be illegal. I think it will just be used to hurt people.…"
- `ytc_UgxiwzvMR…` — "Wow! What an excellent way to make certain your arguments are well polished. Arg…"
- `ytc_Ugyz3xC09…` — "Have you ever tried drawing on paper" / "Yes I love it I but I think ai is "solace"…"
- `ytc_UgxpUUd33…` — "well you could ask AI to do 30 iterations (within one hour of time) with differe…"
- `ytc_Ugzu0e8PG…` — "imagine being the creator and invite of AI, putting it out to the public then w…"
- `ytr_Ugw-EEPfv…` — "Musk has "no moral compass", who created OpenAI to be open, while Altman might h…"
- `ytc_UgxoYG67U…` — "And that's ai started itself a war and bombed the art community and bombed the w…"
- `ytc_Ugwr6S_hy…` — "This type of horror is why advocates have been fighting to ban facial recognitio…"
Comment

> Although it's AI, the possibility of this happening in real life is not low. Robots like this should be utilized more in the manufacturing industry than anywhere else

youtube · AI Harm Incident · 2025-03-27T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugwy6yZHphMVq-pdVFJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxzG7cMD8jSFXBz2Fd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugypkgj2EHxXP98zqUp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgztVfnrMG3z1HYDV7N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzRvjwOU3WdjjtLbMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxO8b4OX_y6hYva0mN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxboz7jpsP7keWXxe14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwItWPZV72evBggCtt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgziZKYJMrTR3FNmkpB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_Ugz3cbA26M-8zel7FEp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
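A batch response like the one above can be checked before the codes are stored. The sketch below is a minimal validator, not the project's actual pipeline code: the allowed values per dimension are inferred only from the codes visible in this batch (the full codebook may define more), and the `raw` payload uses a made-up comment ID for illustration.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this
# sample batch; the real codebook may include additional values.
SCHEMA = {
    "responsibility": {"none", "user", "company", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "ban", "industry_self"},
    "emotion": {"approval", "fear", "outrage", "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every coded record."""
    records = json.loads(raw)
    for rec in records:
        # Every record must carry a non-empty comment ID.
        if not rec.get("id"):
            raise ValueError(f"missing comment id in record: {rec!r}")
        # Every dimension must be present and hold a known value.
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_UgwEXAMPLE","responsibility":"none","reasoning":"unclear",'
       '"policy":"industry_self","emotion":"indifference"}]')
print(len(validate_batch(raw)))  # → 1
```

Rejecting a batch when any single record fails keeps the stored codes consistent; a gentler variant could instead collect the bad records and re-prompt the model for just those IDs.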