Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or inspect one of the random samples below.

| Sample comment (truncated) | ID |
|---|---|
| Although I am pro-AI... I think this is fair game. The AI companies stole the en… | rdc_oi1tq9o |
| We need to build apps around the AI Act and GDPR in order to be compliant. So as… | ytc_Ugwr38qn6… |
| The problem is that thisis happening without any citizens real consent here in t… | ytr_UgwZ2zWi1… |
| In a very convoluted way, the AI successfully gave an idea for real artists to i… | ytc_UgwZr5Ejn… |
| I was *just* discussing this with my wife, but in the context of current bots th… | ytc_UggJr8-UN… |
| A monkey who took a selfie was found to own the rights to photo. I think we shou… | ytc_UgzLmC581… |
| Kids, adults, a lot of people and their dogs "talk" to LLMs to seek validation f… | rdc_mvk96u6 |
| So basically AI will just inbreed itself with shittier AI arts in various sites,… | ytr_Ugwwhmm7b… |
Comment
to everyone talking about how expensive therapy is: there are SO MANY OTHER OPTIONS than AI. stop using AI for therapy PLEASE. it’s going to harm you in the long run. it’s been known to give people psychosis, genuinely, because all it does is reaffirm whatever you input. there are cheap books online or in stores that are guided and written by therapists that have exercises and skills. there are free call lines and text lines that are open 24/7 that you can use for anything, not just a crisis, any time you need to talk to someone. there are support groups online and in person that are completely free that have people in the same boat as you. AI is the lazy and extremely unethical way to go about it. stop it. use your resources.
| Field | Value |
|---|---|
| Platform | youtube |
| Source | AI Moral Status |
| Timestamp | 2025-06-05T14:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgzUW67h1LytYiW0r0d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz43wkf80mga64WxW54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxZHcxALYCbCsjTnct4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwBS_YdbE-D_G--wLJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-ednJu2o9iLHuCER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwUOPeZi9SXuZvGmaV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzmjiG1-5g3nzG18Hp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwcoda9PDX2MeeMDrp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyevP_NEWUVJhrJd654AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy53XT36MXXiZ5bDd14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
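A raw response like the one above can be turned back into per-comment codings by parsing the JSON array and indexing it by comment ID. The sketch below is a minimal, hypothetical helper (`parse_response` and the `SCHEMA` value sets are not from the source; the allowed categories are inferred from the examples on this page, and the real codebook may define others):

```python
import json

# Allowed values per coding dimension — assumed from the examples above;
# the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"user", "company", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "mixed", "unclear"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index it by comment ID,
    skipping any record with an out-of-schema value."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Example: the first record from the response above.
raw = ('[{"id":"ytc_UgzUW67h1LytYiW0r0d4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
codings = parse_response(raw)
print(codings["ytc_UgzUW67h1LytYiW0r0d4AaABAg"]["emotion"])  # indifference
```

Validating each record against the schema before indexing guards against the occasional malformed or out-of-vocabulary value in raw model output.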