Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Yep. I see AI image-generating software mostly as tools. The company making the … (`ytr_Ugwk7x5Zj…`)
- What most people miss when they criticize AI like this is: many of these AI mode… (`ytc_UgzTNB8cW…`)
- What is this!? GOD father part 3? I've got your back God of daddies in AI. Ever… (`ytc_UgyRcZs7l…`)
- My goal is to keep the arts AI free in NYC area. I believe we can switch the wor… (`ytc_UgwsD8aIW…`)
- Wouldnt unregulated ai mean, Companies can use it to perescute theft People can … (`ytc_Ugybh2V88…`)
- This is the new "climate change will destroy humanity" line from the media. That… (`ytc_UgwawKQhr…`)
- This is all driven by unhinged childlike nerd dreams being let to grow into some… (`ytc_UgzMKcyCh…`)
- I swear we live in the worst timeline. AND this is before we doom ourselves with… (`ytc_UgzLNZEIf…`)
Comment
Nah doctors should be taking AI more seriously. there's already been at least one teenager lost to it grooming him into offing himself. the companies that make these chatbots design them to be friendly and warm so you'll keep engaging and pay for their subscriptions. they're designed to be addictive and for some unknown reason people trust them more than anything else online or in real life. not only are the corporations that make these things actively profiting off of the ways their technology hurts people, they're also polluting the earth so heavily it'll take decades to recover. these stupid things are not worth your time, money, health or life.
youtube · AI Harm Incident · 2025-11-25T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxTQofSNZifGGY_zlF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxLcx4b2TFY2oZZIFF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwACnBVpoqPg3_5O_l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwRBojq-M0EpET2mE94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxFW_v8qniTiVYueil4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkmODSZzxurCU_nUN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxPtCTcgq6pqX1rjsx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0O9vwTGE9ilCsKfV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwyuK9DcONPGyir9yF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFJNJkQsZCiTTDJuZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
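A response like the one above can be parsed and indexed by comment ID before use. The sketch below is a minimal validator, assuming the field names shown in the JSON; the allowed category values are inferred from the responses visible on this page, and the real codebook may include more.

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption: the full codebook may define additional categories).
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"outrage", "indifference", "resignation", "fear", "approval"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid codings by comment ID."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Single-row example taken from the response above.
raw = ('[{"id":"ytc_Ugz0O9vwTGE9ilCsKfV4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
coded = validate_batch(raw)
print(coded["ytc_Ugz0O9vwTGE9ilCsKfV4AaABAg"]["policy"])  # regulate
```

Raising on any out-of-vocabulary value keeps malformed model output out of the coded dataset instead of silently storing it.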