Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@TheJokeKiller Only after warning him 40+ times. He shouldn't have even been on…" (ytr_UgySlIy3o…)
- "If the governments, bankers, corporation leaders (or whoever is in charge) did s…" (rdc_cthnpmn)
- "I may be completely wrong, but just to share my perspective; I do believe AI is …" (ytc_UgzG8n-1n…)
- "The Tesla autopilot switching off a second before a crash only gets to show you …" (ytc_Ugz7w8Aw3…)
- "I don't really understand , but you can't say you made something when an AI did …" (ytc_UgyV87ira…)
- "I've always contended computer technology, AI etc is only as good as the nearest…" (ytc_Ugzp32OH0…)
- "So far what I see is a sophisticated version of an inflatable doll. Evidentl…" (ytr_UgxN2zSU0…)
- "Perhaps the danger is not that we will be wiped out by AI. Maybe the danger is …" (ytc_UgxKroFBs…)
Comment

> Any superior intelligence will look upon humanity as a destructive force on earth.
> The only winning move is not to play.
> Of course governments around the world may regulate AI heavily, but that will not control those who's intentions are to revolutionise their immediate surroundings.
> I believe we are past the tipping point where either we kill the planet or AI destroys humanity.

youtube · AI Governance · 2025-08-28T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
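Each coded record should use only values from the coding scheme's controlled vocabularies. As a minimal sketch, the check below validates one record against vocabularies inferred from the sample responses shown on this page; the real coding scheme may define additional categories, so `SCHEMA` here is an assumption.

```python
# Allowed values per dimension, inferred from the sample responses on this
# page (assumption: the real coding scheme may include more categories).
SCHEMA = {
    "responsibility": {"ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

A record that passes returns an empty list; any out-of-vocabulary value is reported per dimension, which makes it easy to flag malformed model output before it reaches the results table.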
Raw LLM Response
```json
[
  {"id":"ytc_UgwEcsd5cSbweTVUir94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxIBjkmF-dzRwB1l7F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgziSPTsbS8fa_Ma5ex4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwrTWSU-YTNuN7eK594AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzxWfy_DHW5tu2CET94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVyMvz_Uwkm8OeSj54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxzJy8d5iwk1FOH9Dd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwI7jatMJhxopV6t1h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgynWi93kP-me27BjAJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgwQSnG-1095Vj28WPV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
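The model returns one JSON array per batch, with one object per coded comment. A minimal sketch of how such a response can be parsed and indexed for the "look up by comment ID" feature (the two records below are copied from the response above; the variable names are illustrative):

```python
import json

# A two-record excerpt of a raw batched LLM response, as shown above.
raw = '''[
  {"id":"ytc_UgwEcsd5cSbweTVUir94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgynWi93kP-me27BjAJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"resignation"}
]'''

# Parse the batch and build an index keyed by comment ID,
# so any coded comment can be looked up directly.
records = json.loads(raw)
by_id = {record["id"]: record for record in records}

entry = by_id["ytc_UgwEcsd5cSbweTVUir94AaABAg"]
```

With the index in place, rendering the per-comment "Coding Result" table is just a matter of reading the four dimension fields off the matching entry.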