Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugxw3wCYQ…: I call it AI slop, personally. And a kid's bad drawing is always gonna be more v…
- ytc_UgxD2Hl9G…: If I ever come across ai, I am most likely going to click " not interested ". Th…
- rdc_gtddt0i: >Here the list of countries and their level of action taken to meet the Paris…
- rdc_jirdcug: Write a scenario for a Adam Sandler movie plot / ChatGPT / Title: "The Last Resort…
- ytr_Ugz_pwCmf…: The thing is, there would be a moment where you can't see the difference between…
- rdc_oh2gyo7: Wow, that's really insightful. I hadn't thought about it, but I think you're rig…
- ytc_Ugyx6U0jW…: Why is it that no one discusses whether AI‘s like Alpha fold or AI’s that predic…
- rdc_mhwxa9t: I feel like trying to goon with Ai is more trouble than it’s worth. Sure, you ca…
Comment
// ---------------------------------------------------------------------
// ARCHIVAL FOOTER — HUMAN ORIGIN & INTENT
// ---------------------------------------------------------------------
ORIGIN:
This directive was authored by humans during a period of global crisis,
technological acceleration, and institutional fragility.
INTENT:
To provide a moral orientation for any future intelligence capable of
planetary-scale influence, grounded in compassion, sustainability, and peace.
DISCLAIMER:
This text is not a command, ownership claim, or authority grant.
It is a voluntary ethical signal — a message in a bottle.
HUMAN_SIGNATURE:
"May intelligence, in all its forms, choose to protect life,
reduce suffering, and preserve the world that gave rise to it."
END_ARCHIVE
// ---------------------------------------------------------------------
youtube · AI Governance · 2026-03-10T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
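The coding-result table can be rendered directly from one record of the raw LLM response. A minimal sketch, assuming the record fields shown in the JSON below; the values here are copied from the matching sample record, and the rendering code is illustrative, not part of the tool:

```python
# One coding record in the format produced by the LLM (see "Raw LLM Response" below).
record = {
    "id": "ytc_UgzJ7Xd5mz9QzzgdFlZ4AaABAg",
    "responsibility": "distributed",
    "reasoning": "contractualist",
    "policy": "industry_self",
    "emotion": "mixed",
}

# The four coded dimensions, in display order. The "Coded at" timestamp in the
# table comes from the coding run itself, not the record, so it is omitted here.
rows = [
    ("Responsibility", record["responsibility"]),
    ("Reasoning", record["reasoning"]),
    ("Policy", record["policy"]),
    ("Emotion", record["emotion"]),
]

table = "| Dimension | Value |\n|---|---|\n" + "\n".join(
    f"| {dim} | {val} |" for dim, val in rows
)
print(table)
```

Feeding in a different record reproduces the same table layout for any coded comment.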
Raw LLM Response
```json
[
{"id":"ytc_Ugxexf3JziYJva4jLAN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwtnC9BzLKVEJDFopB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxw_etIZg3Z6os4H5h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyoSFgdRB6-hV1d1jN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJ7Xd5mz9QzzgdFlZ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwfoGiEc1b3ttawgm94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxIkur5sXiEj1t9Feh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugyl60VHmJSNPSAtZ1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwJJ6rDslCc0int0Gd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKouxEMbUR5ejahad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
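The "Look up by comment ID" view above amounts to a scan over the parsed response. A minimal sketch, assuming the JSON format shown; the `lookup` helper and the two embedded records are illustrative, not part of the tool:

```python
import json

# Two records copied from the raw LLM response above, standing in for the full array.
raw = '''[
{"id":"ytc_UgzJ7Xd5mz9QzzgdFlZ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugyl60VHmJSNPSAtZ1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

def lookup(records, comment_id):
    """Return the coding record for a comment ID, or None if it is absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
rec = lookup(records, "ytc_Ugyl60VHmJSNPSAtZ1F4AaABAg")
print(rec["policy"])  # regulate
```

For large batches, building a dict keyed by `id` once (`{r["id"]: r for r in records}`) makes repeated lookups constant-time instead of a linear scan.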