Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@AITube-LiveAI When are you people going to get it that Sophia is a computer a…
ytr_Ugw3U7SVp…
Idk if this happened to someone else, but Sora AI said that make animations and …
ytc_UgzfOPMjS…
I can't help but think that whatever ends the world (in the sense of grid all th…
ytc_UgzO6Jddr…
I find it disingenuous to argument that humans are the only ones with soul or ot…
ytc_Ugzziy-AN…
@eafesaf6934 he's talking about all these people saying "even ChatGPT will take…
ytr_Ugxof5TLq…
The heavy handed AI editorialising is going to be a real problem. they will all …
ytc_Ugx-hZdCu…
AI knows that despite only representing 20% of the USA population, they represen…
ytc_UgxSrRTFY…
Context engineering is a lie, it is the same shit as vibecoding. Regardless of h…
ytc_UgxMIIDMA…
Comment
I'm generally pro AI, which I know is super unpopular right now, but at its core it's *just* a tool. However, it's a very powerful tool with little to no regulation and THAT'S bad.
I have severe depression and anxiety. I go to an actual human therapist once a week, but DeepSeek has been a good "in the moment" mental health tool for me. Like if I feel a panic attack coming on, I open the app, type out what's causing the anxiety, and it gives good tips.
But I'm in my late 20s, I can't imagine what I would've done if this technology was available when I was a teen, especially with Character AI.
youtube
AI Harm Incident
2025-07-20T20:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwkBO3jI4Rmd7w_BjN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-GdAdGlW7AjfsUSZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyEttvxEGwqY3P8LFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyKfC_dwrw0qGTGzwB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzh-bviUwDHTcw2ymF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2UKuPUPvUDFW7iVl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyAFxlUlSc2oAnRw_x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzmvMx6CmCJWQPqVEJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2EfY_xy-rgHm1E0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9AjchNQRiOT7VEsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
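The lookup-by-ID step above can be sketched in a few lines: the coder returns a JSON array of per-comment codings, which indexes naturally into a dict keyed on comment ID. A minimal sketch, assuming the schema shown in the raw response (the IDs below are copied from it):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment,
# with the dimensions responsibility / reasoning / policy / emotion.
raw_response = """
[
  {"id": "ytc_UgyKfC_dwrw0qGTGzwB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugy2UKuPUPvUDFW7iVl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up a single comment's coding by its ID.
coding = codings["ytc_UgyKfC_dwrw0qGTGzwB4AaABAg"]
print(coding["policy"])  # regulate
```

In practice the parse step would also want to validate that every record carries all four dimensions before indexing, since a malformed model response is the usual failure mode here.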