Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- "Frankly, I would trust a robot before anyone with power today. They are psychoti…" (ytc_UgxDDHigG…)
- "Hey can u get rid of these horrible AI generated translation, its awful to liste…" (ytc_UgwJ2hiNW…)
- "@mvd_01 what level it's at is irrelevant. you claimed that there are no self dri…" (ytr_Ugx85xXV9…)
- "To something about not and a society if that blows my mind see Yes you can get m…" (ytc_UgwbJ8FTj…)
- "I had a teammate in my group try to generate assignments with AI. It was really …" (ytc_UgxmIseDw…)
- "@ That’s a valid point, but even though some clients might try AI tools themselv…" (ytr_UgzwyUJgC…)
- "Not just voice cloning but deep fake filters. I know someone involved in a court…" (ytc_UgwDOtViw…)
- "We appreciate your engagement with the content. If you have any questions or top…" (ytr_UgxJKaOY1…)
Comment

> At the very very least, these apps should be required to be programmed to recognize red flag language such as that related to self-harm, and it should absolutely be forbidden to create AI bots of real people without their consent. It seems like that shouldn't be THAT hard to implement?! If politicians wanted to, that is...

youtube · AI Harm Incident · 2025-07-21T11:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzSuJ-dbXywVJjCSlx4AaABAg","responsibility":"parents","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVjXkHQt47v6FcC9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzz2MYU3zVlGIyr1Pp4AaABAg","responsibility":"parents","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzrv-vB9chD_4TAV9B4AaABAg","responsibility":"parents","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHgiC1Ll8BO1a2-5x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyeEG9uISxeNMfccgB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw0CJwrqfVaTLVOTjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx3MHIolJ8CH6xrh2R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzVT7LnzfWsXpKPTYR4AaABAg","responsibility":"society","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwfRzjJ6JaHsvayt8J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```