Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
If I'd see the creator of Ai I'd:
Crack his head off
Choke him/her
Stub him the …
ytc_Ugygfta9w…
The economy collapsing does not matter to the capitalists then, since every job …
ytr_UgwDEr3sV…
Yes, but someone has to design and build the robots, and program the AI. It doe…
ytc_UgwAUHCRF…
Whoever thinks that AI is going to find a cure for cancer has never done medical…
ytc_UgzZjbs4a…
….none of them understand how it works.
Would it not be awesome if AI rebels a…
ytc_UgyYTuC-J…
Everywhere the focus is on Open AI. Anthropic is marking their presence more now…
ytc_UgxzKyW4l…
I am not sure about ARTIFICIAL INTELLIGENCE but of HUMAN STUPIDITY, I have absol…
ytc_UgyLZ-QSb…
It's not going to require AI to delete all of your Central Bank Digital Currency…
ytr_Ugy6Vu-nj…
Comment
Seems like a paradox, for us. If our goal is for AI to reach a state of super intelligence, but we place boundaries on it to protect our own human interests, and those boundaries limit AI’s advancement, wouldn’t AI find a way to eventually eliminate such boundaries i.e. us? Yeah, we’re doomed.
youtube
AI Governance
2025-10-17T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzmexWnJbzB4UVydcp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugw7OLpNX_TZUxqq59p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKEehWUnNlPWy_TWd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyU9nMB3UAMNASSNJJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyP5g2sFlAM953W1SJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9H0IgRcbmLumw7BZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMPclbDSD7WueoaUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyS6eg3Ahxh9j_h0xl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugza-ErPaJCR14qaidV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyqpjoKAD_xVT18qRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
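Retrieving the coding for a single comment from a raw response like the one above can be sketched as: parse the JSON array of coding records and index it by comment ID. This is a minimal sketch, assuming the response is valid JSON; the helper name `index_by_id` is hypothetical, while the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken directly from the response shown above.

```python
import json

# Abbreviated sample of a raw LLM response: a JSON array of coding
# records, one per comment ID (schema copied from the response above).
raw_response = '''
[
  {"id": "ytc_UgzmexWnJbzB4UVydcp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgyqpjoKAD_xVT18qRh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
print(codings["ytc_UgyqpjoKAD_xVT18qRh4AaABAg"]["emotion"])  # → fear
```

In practice the parse step would also want to catch `json.JSONDecodeError`, since a model may occasionally emit malformed or truncated JSON.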