Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Steven asks very concise questions.... I wonder how his Team, or he himself, com…
ytc_Ugw4Hx8ar…
Anyone can take your art style. It doesn't need to be ai. In the end, you can co…
ytc_UgyFm7LCa…
And our president continue to put huge amount of capital to those AI sharks. A…
ytc_UgyDh-hbJ…
Do- …do you guys think this is real? Holy shit we’re fucked if this many people …
ytc_UgyETXEBt…
Great video. Very clear. AI is not art. It's tech. If you want art, hire an arti…
ytc_UgyrvA2ko…
@Jupa As much as I’d like to agree, plenty of artists just never get exposure re…
ytr_UgwXj1cUJ…
nope nope nope nope, I dont want that I cant tell if its a human of a robot. if …
ytc_Ugi7GZAMM…
It's not the AI that is a problem, it is the people in positions of power who ca…
ytc_UgxbOCjn7…
Comment
I don't know how obvious it is to a lot of people. So much of the overall conversation about AI and mental health seems to be whether the act of using AI itself is unhealthy or not. One of the hot [posts](https://www.reddit.com/r/ChatGPT/s/kAkpne2hQb) yesterday was a PSA about why it is, repeating common points with a reaction of "what the fuck" and "this is appalling." A lot of posts about the topic have really only pointed out why AI can be harmful, why it isn't good for therapy, etc. I haven't seen many people point out why people turn to AI beyond that, but I don't go on Reddit too often so I could be wrong.
Source: reddit
Topic: AI Moral Status
Posted (Unix timestamp): 1754767842.0
♥ 7
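The timestamp in the metadata above is a Unix epoch value. A quick sketch of rendering it as a UTC datetime with the standard library:

```python
from datetime import datetime, timezone

# Unix timestamp taken from the comment metadata above
ts = 1754767842.0
posted = datetime.fromtimestamp(ts, tz=timezone.utc)
print(posted.isoformat())  # 2025-08-09T19:30:42+00:00
```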
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_n7todyq","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_n7tueww","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_n7u3dwr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_n7tvelw","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_n7t39bf","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
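The raw response above is a JSON array of per-comment codes over the four dimensions shown in the coding-result table. A minimal sketch of parsing such a payload into per-ID rows (the `parse_codes` helper name is hypothetical, and the sample payload here is trimmed to two rows from the response above):

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Parse a raw coding response (a JSON array of objects) into a
    mapping from comment ID to its coded dimensions, defaulting any
    missing dimension to "unclear"."""
    rows = json.loads(raw)
    return {
        row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
        for row in rows
    }

raw = '''[{"id":"rdc_n7todyq","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_n7t39bf","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}]'''

codes = parse_codes(raw)
print(codes["rdc_n7t39bf"]["emotion"])  # fear
```

Note that `json.loads` would raise on the original payload's stray closing parenthesis, which is one reason to validate model output before rendering it into a table.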