Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
That's cool but why does this need to be a full on conversation when this intera…
ytc_Ugzq6MLb_…
And thats robot soldiers stormy people that dont do what is demanded and governm…
ytc_Ugwp1HA3B…
I’ve never liked that OpenAI guy — there’s just something about him that feels o…
ytc_Ugy4yZIQB…
She is cutting the video after asking the question, then she asked a different q…
ytr_UgygjpyZi…
The thing is, creating an AI that is actually sentient is plain stupid. Not only…
ytc_UgxrHXmpI…
They didnt know it was bias. The bias is very well hidden and subtle, not someth…
ytr_Ugw1tEgHz…
I'm against human rights for AI, I think so much of what makes us human is tied …
ytc_UgwEB3AAu…
I would much rather put my trust in AI than in people or this prophet of doom. P…
ytc_UgzDE04Se…
Comment
Maybe AI shouldn't be given control of nuclear weapons. Especially one's built as an LLM that uses anything it can scrape off the internet
But I don't know. I'm not the one in charge of Skyner
Source: reddit · AI Jobs · 1772194441.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o7ol4r1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_o7oiner","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_o7ops04","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_o7orawj","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"rdc_o7osu1l","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
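The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch response might be parsed and indexed for lookup by comment ID; the function name, and the choice to keep only the four coding dimensions shown in the table, are assumptions for illustration:

```python
import json

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Return {comment_id: {dimension: value}} from a raw JSON-array response."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        # Index by comment ID; keep only the expected dimensions, ignore extras.
        coded[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return coded

raw = '''[
  {"id":"rdc_o7osu1l","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''
print(parse_coding_response(raw)["rdc_o7osu1l"]["policy"])  # → liability
```

Indexing by ID this way also makes it easy to detect a malformed batch: any comment ID missing from the parsed dictionary was dropped or mangled by the model and can be re-queued.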