Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Wait, so you are saying people shouldn't put trade secrets, inside information, …
ytc_UgyMtF_ZB…
@Vi1eVixen up to you how you want to treat them, as an AI or as a Person, some p…
ytr_UgzWOIHH8…
Here we go again. Tulips, the South Sea Bubble, and now tech bros racing round t…
ytc_Ugx3kBWC9…
Well, Ai can always pretend to put emotions into it by stealing from others. I b…
ytc_UgxDRIvoH…
@MCNarret no government ever catches up and there is no goal of catching up. The…
ytr_Ugwo_BpW2…
"A policy against sentient AI" sounds like it was written by a kindergartener. A…
ytc_UgxfqLm26…
I think ChatGPT recognized the context of the images through pure rote memorizat…
ytc_UgzdM3HLK…
I mean the writing sucks now a days anyway so if I was Hollywood yea I would be …
ytc_UgzXJPPpF…
Comment
The idea that OpenAI being a non-profit would prevent AI from being utilised for unsafe purposes is naivety at its finest. This was never going to work.
Did they really believe they would be the only ones developing LLM models?
reddit
AI Governance
2024-05-19 (Unix timestamp 1716129011)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_l4qdl7q","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_l4opw2q","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_l4qdv85","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"rdc_l4qm5yt","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_l4p61g1","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
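The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch response might be parsed and validated before storing the codes — the allowed value sets here are inferred only from the codes visible in this log and are almost certainly incomplete; `parse_coding_response` is a hypothetical helper, not part of any shown tool:

```python
import json

# Allowed values per dimension, inferred from the codes seen in this log
# (assumption: the real codebook likely has more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM JSON array into {comment_id: codes}, skipping invalid rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if cid is None:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        # Keep the row only if every dimension holds a known value.
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = '''[
  {"id":"rdc_l4qm5yt","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''
result = parse_coding_response(raw)
# result["rdc_l4qm5yt"]["emotion"] == "resignation"
```

Validating against an explicit vocabulary catches the common failure mode of LLM coders inventing off-codebook labels, so bad rows can be re-queued rather than silently stored.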