Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- One of these days an AI is gonna get sick n tired of humanity and meditate under… (rdc_dlgpysb)
- "this is how my ai message summarizer summarized a breakup text sent to me" if y… (ytc_UgycoBld8…)
- This whole segment was so frustrating. Krystal and Saagar are just buying into t… (ytc_Ugy5llsUC…)
- Yeah but all the companies that save money from automating with AI will just poc… (ytc_Ugy1pWxyE…)
- I mean, i dont like ai art either but i dont see so called "soul" that u ppl obs… (ytc_Ugwd4p5Dk…)
- We really don’t have to worry about overpopulation for several factors that I’m … (ytc_Ugwn9gfLp…)
- this smart guy thinks that AI is going to wipe out humanity but bitcoin will rem… (ytc_UgzsSSaDs…)
- If people want to stop using LLMs, fine. But let’s not pretend that’s going to m… (ytc_Ugx7HN_wz…)
Comment
Okay, so what's the disastrous risk in me asking chatgpt to draft a heartfelt letter to my friend telling him to suck it?
Instead of waiting for an answer, I'll tell you why-
It's not about risks to society, it's about not offending anyone, period. No wishy washy bullshit about misinfo will ever change that
Source: reddit
Topic: AI Responsibility
Posted (Unix time): 1682551567.0
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_jhsxp3o","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"rdc_jhtett3","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"rdc_jhtlguu","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_jhunqol","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"rdc_jhu9yw4","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}]
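A response like the above can be turned into per-comment codes with a small parser. This is a minimal sketch, assuming the batch response is a JSON array of objects keyed by `id` plus the four coding dimensions shown in the table; the function name `parse_coding_response` and the fallback behavior are hypothetical, not the tool's actual implementation. Note that a malformed payload (e.g. a stray `)` where `]` was expected) would fail JSON parsing entirely, which is one way a comment could end up coded `unclear` on every dimension.

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: {dimension: value}}.

    Returns an empty dict when the payload is not valid JSON, so the
    caller can record every dimension for the batch as "unclear".
    Any dimension missing from a record also defaults to "unclear".
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    coded = {}
    for rec in records:
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

# Well-formed single-record batch (hypothetical input):
ok = '[{"id":"rdc_jhsxp3o","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}]'
print(parse_coding_response(ok))

# Truncated/malformed payload falls back to {}:
print(parse_coding_response('[{"id":"rdc_jhsxp3o"'))  # → {}
```

The fallback-to-`unclear` design keeps a single malformed batch from aborting a larger coding run while still flagging it for manual inspection.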