Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I honestly don't give a single fuck. It helped me more than anyone ever did. I could be dead rn or at least still stagnant in a seemingly impossible situation. Ask the hard questions. And don't reveal sensitive data. Just because it's information that's personal to you doesn't mean it is inherently sensitive and therefore bad to just have over. Getting better is more important than keeping the AI models from using your info to help other people get whatever they're after. At this point the people who want it can just get it without any permissions via electronic warfare or simple hacks. And also, imagine if AIs had a shit ton of data on what actually helped people. Maybe it would simply get better at helping people. But like any real therapy session, it is self guided by your own intent to uncover, restructure and grow.
Source: youtube · AI Moral Status · 2025-06-05T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[{"id":"ytc_UgzUW67h1LytYiW0r0d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz43wkf80mga64WxW54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgxZHcxALYCbCsjTnct4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwBS_YdbE-D_G--wLJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx-ednJu2o9iLHuCER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwUOPeZi9SXuZvGmaV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzmjiG1-5g3nzG18Hp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugwcoda9PDX2MeeMDrp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgyevP_NEWUVJhrJd654AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugy53XT36MXXiZ5bDd14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
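A batch response like the one above can be turned into a lookup table keyed by comment ID. The sketch below is a minimal example, not the tool's actual parser: the allowed value sets are inferred only from the codes visible in this response and in the Coding Result table, so the real codebook may define more categories. Records with a missing ID or an out-of-codebook value are dropped rather than guessed at.

```python
import json

# Allowed values per coding dimension -- inferred from the examples above,
# NOT from a published codebook (assumption).
ALLOWED = {
    "responsibility": {"user", "company", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Skips records that lack an "id" or carry a value outside the
    inferred codebook, so one malformed record doesn't sink the batch.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded
```

Validating against a closed value set is the main point: LLM coders occasionally emit near-miss labels (e.g. a misspelled category), and rejecting those records keeps the downstream tallies clean.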