Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "No, we know exactly how they work, they know exactly how they work, they built i…" (ytc_Ugzb3ixO1…)
- "Does anyone know how to put to ai on a chat i want to see how randy cunningham b…" (ytc_Ugy9-EOtW…)
- "I mean that's like one of the main reasons it's being reported as a bubble right…" (ytr_UgxDLSwWo…)
- "Yep that made me almost fall of my chair ha ha, what an idiot, of course we do u…" (ytr_UgzuU0OpW…)
- "China has been using facial rec. and now with biometrics check 4 fever, contract…" (ytr_Ugyy_bwsz…)
- "Since they've drank massive amounts of AI koolaid, does this mean that the AI de…" (rdc_oi3getw)
- "The comments on the economy are downplaying the impact across all industries. He…" (ytc_Ugy4KOGHc…)
- "Oh god, the government trusting AI? We're all doomed, anyone can be guilty for a…" (ytc_UgyWGmMHh…)
Comment
Artificial intelligence, when viewed solely through the lens of panic, becomes a mere conspiracy theory. Humans don't need money, power, atomic bombs, or artificial intelligence. What they truly need is health, shelter, food, and clothing.
Money is merely a last resort that no one truly needs; it serves only as collateral for the exchange of ready-made products. Artificial intelligence can be considered a weapon, but if it doesn't serve humanity, it becomes useless—even if it has the potential to contribute to the world.
Consider the atomic bomb: it is capable of controlling the world, but it is useless because, if used, it will kill all humans. Therefore, it is an important instrument of control, but at the same time useless, since it does not serve humanity—it only eliminates it.
If artificial intelligence reaches the point of dominating humans, it will be an important, powerful, and useless creation, as it is harmful to humans.
youtube · AI Governance · 2025-12-15T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwU-0SOkR6ksE0nM-h4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxWBJTZ8DuvAxz3IfV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw5rcPt-gEJfFEzdC94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz1fyKr1krzgW1fwgl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxnh_ruGZIiDyvoTql4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwvNwotJjR9EGppcwF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugyo81l7McnXnYwFvpB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwFpycqILSOd5e8pm14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwW-4uI15aTpqT2mBx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxJ9qQM71yNQ0VCLDZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
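The look-up-by-comment-ID view above can be reproduced directly from a raw response of this shape. A minimal sketch, assuming the response is a JSON array of records with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys as shown; the `index_by_id` helper and the sample records are illustrative, not part of the tool:

```python
import json

# A raw LLM response: a JSON array of coding records, one per comment,
# following the shape shown above (records here are copied from the sample).
raw_response = """
[
  {"id": "ytc_Ugxnh_ruGZIiDyvoTql4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwW-4uI15aTpqT2mBx4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw response and index its coding records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["ytc_Ugxnh_ruGZIiDyvoTql4AaABAg"]["emotion"])  # resignation
```

With the records keyed by ID, rendering the per-comment "Coding Result" table is a single dictionary lookup per dimension.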