# Raw LLM Responses
Inspect the exact model output for any coded comment. Entries can be looked up by comment ID; the list below shows random samples.
- *I think the big companies like Google should have an "opt in/out" of AI training…* (ytc_UgzH0M59-…)
- *My opinion on ai is that it should be used as a tool and not a salve, it's suppo…* (ytc_Ugwh69x5h…)
- *Well not exactly. I want to have a story. That’s what I care about the most. I…* (ytc_UgxzDt5bM…)
- *there are so many possibilities with ai that its going to be insane and keep get…* (ytc_UgyRY_x_K…)
- *The expression of that manly robot is funny, when the speaker talks about " usin…* (ytc_UgxFwJKOT…)
- *Okay they plan on chipping those who will volunteer for the neuralink and others…* (ytc_UgyQ9MsSR…)
- *There's a huge difference between LLMs and "traditional" AIs that have been arou…* (ytc_Ugz3VCthC…)
- *I have found ChatGPT to be great at taking a Create Table SQL query and building…* (ytc_UgwjA0WZ-…)
## Comment

> Idk man. Humans are pretty unreliable and hallucinate all the time. Can AI really trust them with tasks like that? I wouldnt.

Source: reddit · Topic: AI Moral Status · Posted: 1770178339.0 (Unix timestamp) · ♥ 223
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
## Raw LLM Response
```json
[
  {"id":"rdc_o3gr4dn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_o3gs5zt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_o3h8gdw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"rdc_o3h18m1","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"rdc_o3ik3uf","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
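Since the raw response is a JSON array of per-comment codings keyed by `id`, looking up the coding for one comment is a matter of parsing the array and indexing it by that field. A minimal sketch, assuming only the JSON structure shown above (the variable names here are illustrative, not part of the tool):

```python
import json

# Raw LLM response as shown above: a JSON array, one coding object per comment.
raw_response = """
[
  {"id":"rdc_o3gr4dn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_o3gs5zt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_o3h8gdw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"rdc_o3h18m1","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"rdc_o3ik3uf","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
"""

# Index the codings by comment ID so a single comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# The coding for the reddit comment displayed above matches its result table.
coding = codings["rdc_o3h18m1"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])  # user virtue indifference
```

Note that the dict comprehension silently keeps the last entry if the model ever emits a duplicate `id`; a stricter loader would check for collisions before inserting.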