Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "ai art is just tv dinners of art. like technically it's art, but it was made by …" (`ytc_UgzUD2bim…`)
- "How does mankind, even today, treat “lower life forms” that threaten its surviva…" (`ytc_UgyO8ZH7I…`)
- "AI should need to be more developed, this kind of animation project is cringe. F…" (`ytc_Ugz_DapOf…`)
- "The only things I use llms for are quick facts (what is food additive 671?), ex…" (`ytc_UgyMOJdCG…`)
- "Instead of trying to have millions of self driving cars, often with one passenge…" (`ytc_UgwYKJ1ag…`)
- "The only jobs I can imagine a robot will have trouble doing is: Working with hor…" (`ytc_UgznrFf1i…`)
- "i'm more worried about how stupid people are getting and how hopeless they sound…" (`ytc_UgwZ-cwzf…`)
- "Capitalism is hilarious 15:43 And there it is. The reason for AI art. It's to a…" (`ytc_Ugx5yNKid…`)
Comment
> I don't know if they do something like this already for AI but can they write some sort or empathy code? or have the AI be responsible to care for something? kind of like a therapy animal helps people with many different emotional or health problems? Perhaps it could learn how to 'care' instead of just concentrate on negative human behaviours like greed or war. I don't know exactly how this would work but maybe it's something to consider
youtube · AI Harm Incident · 2025-08-28T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzlY7ncqzAQ4e2Xic14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyxT1sGjezj5CYzPp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugwr3BADmTETQpjXLKB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyoHQTURuSl4AIskL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy8qUinDUECMPHhATB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyDajt7x3XOcbI_w3l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgylY1z0JTgLxAaRezV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzPCg9uJGG1yhkdmjh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwyAXqmkhoZ8jsQHTJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwBiSJg482AX8nH2CJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
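A response like the one above has to be parsed and checked before its records reach the coding table. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the sample output shown here, and the project's actual codebook may define more (or different) categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"developer", "user", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "approval", "outrage", "resignation", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object whose id looks like a YouTube comment id.
        if not isinstance(rec, dict) or not str(rec.get("id", "")).startswith("ytc_"):
            continue
        # Every dimension must carry one of its allowed values.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"industry_self","emotion":"approval"}]')
print(parse_codings(raw))
```

Dropping malformed records (rather than raising) keeps one bad line in a batch response from discarding the other nine codings.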