Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugw2Rvjzf…`: "First problem. The AI revolution isn't like previous automations. It may be. If …"
- `ytc_Ugy72Nf0_…`: "AI can fix or maintain things. Only humans can. Sorry guys. The world will still…"
- `ytc_Ugx0MyITz…`: "If many people lose their jobs, we need to start thinking about a universal basi…"
- `ytc_UgwVj5Xfr…`: "If these terrorists technocrats want AI to replace human jobs it means they want…"
- `ytc_UgwEbm2pK…`: "The CEO does this on purpose, he knows the ads make people angry. Said its dyst…"
- `ytr_Ugz5wCcwF…`: "Because they were the ones who are pushing for this sh1t, Ai basically means, yo…"
- `ytr_UgxgwiAVk…`: "@MrMojoRisenx tl;dr, robots and ai are code made by humans and cannot be sentien…"
- `ytc_Ugyu4FAyF…`: "Before personal computers were a thing, long before the internet, people working…"
Comment

> Huh? This is weird. None of this is how LLMs work. They don't think. At all. They make predictions based on our thinking. They simulate our thinking, our reasoning. If this is how people think of LLM-based AI we're hosed.

youtube · AI Governance · 2026-02-07T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxuSEuDMEC0tjiJE9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyTCxNwGq_lgqDh3Kx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyAx2Qpr6NczK02Snl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxNxLLZ3dY_a9Gt6Dp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzinWPBk9jl7p9eQvZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxX6Vu6aMBrr_4GDv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzdCq2DGG1FE1ibytx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwWW2Il3p8Faim5A6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzsHTOyhAvQG85uZP54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx849mvh7WM2eMtkQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
```
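The raw response is a JSON array in which each element carries a comment `id` plus the four coded dimensions, so recovering one comment's coding (as shown in the "Coding Result" table above) reduces to parsing the array and indexing it by ID. A minimal sketch in Python, assuming the raw model output is available as a string (abbreviated here to two entries copied from the batch above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Only two entries from the batch above are reproduced here.
raw_response = '''
[
 {"id":"ytc_UgzsHTOyhAvQG85uZP54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzdCq2DGG1FE1ibytx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
'''

# Index the codings by comment ID so "look up by comment ID" is a dict access.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzsHTOyhAvQG85uZP54AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer indifference
```

In a real pipeline the same index would also catch malformed batches early: a missing or duplicated `id` shows up as a length mismatch between the parsed array and the dict.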