Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `rdc_mzyw68u`: This is a philosophical question. There’s no evidence that how humans “think” i…
- `ytc_UgzIdCvbO…`: I WANNA DIE IN THIS WORLD WHEN THERES A ROBOT THEY DONT THINK ABOUT THE HACKERS…
- `ytc_UgyOo-cyw…`: Yeah I agree that chat gpt shouldn't be used to just get things done for you but…
- `ytc_UgzxsRG4t…`: I was polite to AI from the start, just because I want it to be friendly, NOT…
- `ytc_UgxTvXwUV…`: To my understanding ai dose depends on information and data , that how it surviv…
- `ytc_UgyxpUROr…`: One AI can create a small page. Just need another AI to control a bunch of AI a…
- `ytc_UgwhUMhmO…`: Oh who cares? So he wanks to some deep fakes, who cares? Just don't watch him.…
- `ytr_UgyKPjaJe…`: @chrischen8580 , Yes............movies made into reality! 1st sign was the extre…
Comment

> If I was an "ai" wouldn't this make me more likely to want to hurt humans?

| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Responsibility |
| Posted (Unix time) | 1606054944.0 |
| Score | -4 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
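The table above can be modeled as a small record type. The sketch below is an assumption about a reasonable internal representation (the class name `CodingResult` and its fields are hypothetical, mirroring the table's rows); values are kept as free-form strings since the exact codebook is not given here.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CodingResult:
    """One coded comment along the four dimensions shown above.
    Hypothetical schema inferred from the 'Coding Result' table."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    # Timestamp of when the coding was produced, defaulting to "now" in UTC.
    coded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def to_markdown(self) -> str:
        """Render the result as a Dimension/Value markdown table,
        matching the layout used on this page."""
        rows = [
            ("Responsibility", self.responsibility),
            ("Reasoning", self.reasoning),
            ("Policy", self.policy),
            ("Emotion", self.emotion),
            ("Coded at", self.coded_at.isoformat()),
        ]
        lines = ["| Dimension | Value |", "|---|---|"]
        lines += [f"| {k} | {v} |" for k, v in rows]
        return "\n".join(lines)
```

Rendering `CodingResult("ai_itself", "consequentialist", "unclear", "fear")` reproduces a table of the same shape as the one above.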
Raw LLM Response
```json
[
  {"id": "rdc_gd7vctg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "rdc_gd7vew5", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"},
  {"id": "rdc_gd7xdwy", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "rdc_gd7ybiy", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",          "emotion": "approval"},
  {"id": "rdc_gd7zbp9", "responsibility": "developer", "reasoning": "mixed",            "policy": "industry_self", "emotion": "mixed"}
]
```
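A raw response like the one above is a JSON array of per-comment codings keyed by `id`. A minimal parsing sketch, assuming Python and assuming the allowed category values are exactly those seen in the examples on this page (the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# This is an assumption, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response into {comment_id: coding},
    dropping rows whose values fall outside the expected schema."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if cid is None:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded
```

Indexing by `id` is what makes the "Look up by comment ID" view above cheap: each coded comment is a single dictionary lookup rather than a scan over every raw response.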