Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
| Preview | Comment ID |
|---|---|
| So we are already in Stage 4 - Next is Stage 5 : AGI -someone suggested coding "… | ytc_UgweDRuAZ… |
| AI is replacing most of our jobs, RIGHT NOW.... Anyways, I hope you found this h… | ytc_UgzE5xX2u… |
| No I need to speak with real people it’s so hard to talk to someone real at xfin… | ytc_Ugwlbjb9r… |
| Instead of not being able to trust what is real, AI may finally expose that real… | ytc_UgwIIDgBi… |
| @AlienZizi Why the incredulity toward a simple question? Maybe you have other is… | ytr_UgyXEYq34… |
| @philatag I’m telling you now, the AI they’re talking about is just learning alg… | ytr_Ugxu_TIZo… |
| The median age of the Senate is 64, House is 57. Most of them are clueless abou… | ytc_UgxMhRvpw… |
| It wouldn’t have to be handwritten. Most colleges already have testing centers … | rdc_nu1no9c |
Comment
Google reportedly had a line in their corporate policy book: Don't be evil. I heard that was taken out, and that's sad because evil doesn't need a legal definition. But sooner or later, the problem with all big companies is that, to paraphrase Ian Malcom: The lawyers are so preoccupied with whether something was illegal that they failed to see (or care) whether it was wrong. And so if ai ever causes an apocalypse, rest assured it will be legal.
But I like Neil's closing statement.
youtube · AI Governance · 2026-03-30T09:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
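Each coded comment is assigned one value per dimension. A minimal validation sketch in Python, assuming the allowed values are only those that actually appear in this page's raw response (not an official codebook — a real codebook may define more categories):

```python
# Allowed values per dimension, inferred from the codes visible on this page.
# This vocabulary is an assumption, not the tool's authoritative schema.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself"},
    "reasoning": {"mixed", "virtue", "deontological", "consequentialist", "contractualist"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the known set."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

# The coding shown in the table above passes cleanly.
coding = {"responsibility": "company", "reasoning": "deontological",
          "policy": "liability", "emotion": "outrage"}
print(validate(coding))  # []
```

A check like this is useful because the raw values come straight from model output, which can drift outside the expected vocabulary.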
Raw LLM Response
```json
[
{"id":"ytc_UgwCnFTfErlac_vkHf14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoYZLadxBC-5wHdpZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxEcDaQkJeaN1BjkQl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwaqiL_Ia1KkOnxjth4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzJCZK6pfP2YF5cHsZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzO4kNq6NN3sqO-BOV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwYeFA9-6isaDGo0Ol4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyekVqINKGZF32cC8d4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwD61fBACZWPnXeowt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyvceEXJZKwT-zJ8sN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
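Since the raw response is a JSON array keyed by comment ID, the "look up by comment ID" feature above can be sketched by parsing the array and indexing it. A minimal example, using two records abridged from the response shown (the parsing approach is an assumption about the tool, not its actual implementation):

```python
import json

# Two records copied from the raw response above; a real response
# contains one record per sampled comment.
raw_response = """[
  {"id": "ytc_UgwaqiL_Ia1KkOnxjth4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzJCZK6pfP2YF5cHsZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

# Index the array by comment ID for constant-time lookup.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up a single comment's coding by its ID.
rec = codings["ytc_UgwaqiL_Ia1KkOnxjth4AaABAg"]
print(rec["policy"])  # liability
```

Indexing once and looking up by ID avoids rescanning the array for every inspected comment.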