Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- `ytc_Ugwx-cZ_U…` "Let's get back to the basics: clean water, nutritious food, safe shelter, access…"
- `ytc_UgxW64TOo…` "My god they are slow - is this US robot tech? If yes then they have a lot of ca…"
- `ytc_UgztrpcDL…` "So by Harari's reasoning an AI told not to do anything stupid by humans would ob…"
- `ytc_Ugw4ntAGV…` "As someone who uses AI to develop AI driven algorithms I feel we are in an arms …"
- `ytc_UgzPYUJ5U…` "The AI headgears used in China classrooms shown in this video look like the head…"
- `ytc_UgzNrpf1a…` "Little known fact, Asimov had a fourth law about robots: 4) Robots will be given…"
- `ytr_UgzNsiHPc…` "AI can help with cancer??? Never heard of that. Well seems that “AI bros” prefer…"
- `ytc_UgyqungH7…` "I sucked at it and found a real job. Meanwhile I kept drawing. And guess what. …"
Comment

> Kill us all seems too extreme. If AI will be as smart as they say it will be in short order it will recognize how insanely rare the human race that created it is on a galactic scale. Cull the herd, sure. Control the vast majority of us to our demise because most humans are easily manipulated by their basest desires, absolutely. But extinction is a fantastical short-cut to genuine analysis of potential outcomes of super intelligent AI. Change is never pretty and we are clearly coming to the end of one of the many golden ages of human civilization. If we could travel 150 years into the future I think the Earth and humans will be far better off.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-06-18T22:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
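Each coding assigns one categorical label per dimension. As a minimal sketch of how such a result might be checked, the snippet below validates a coding against label sets inferred from the values visible in this tool; the actual codebook is an assumption here and may define more categories.

```python
# Hypothetical label sets, inferred from values observed in this page;
# the real codebook may differ.
ALLOWED = {
    "responsibility": {"government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is well-formed."""
    problems = []
    for dim, allowed in ALLOWED.items():
        if dim not in coding:
            problems.append(f"missing dimension: {dim}")
        elif coding[dim] not in allowed:
            problems.append(f"unexpected {dim} label: {coding[dim]!r}")
    return problems

# The coding shown in the table above passes cleanly.
print(validate({"responsibility": "ai_itself", "reasoning": "consequentialist",
                "policy": "none", "emotion": "fear"}))  # []
```

A check like this is useful because raw model output can drift outside the label set, and unvalidated labels would silently corrupt downstream counts.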
Raw LLM Response

```json
[
{"id":"ytc_Ugx8BKyFly3QZlYOybN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz-nXsMIMiN9rkF25t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwMzDDI9aeyM3P4WEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzA0ysf0mnTRSUStnF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzE6FkeZ3RLzkMj-hx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwzFTOw7_E8HkGM1H54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxkGHIiUUBRzcxV9bJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwAo8WZossLseHzNWx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxfzKvdmx3JXg-Kf5d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz63yaiJ31uwfEwvsN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
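The raw response is a JSON array of per-comment codings, which is what makes lookup by comment ID possible. A minimal sketch of that lookup, using two entries copied from the batch above (the full batch would be handled the same way):

```python
import json

# Two entries from the raw model output above; the real batch has ten.
raw_response = """
[
 {"id": "ytc_Ugx8BKyFly3QZlYOybN4AaABAg", "responsibility": "government",
  "reasoning": "deontological", "policy": "none", "emotion": "fear"},
 {"id": "ytc_UgwAo8WZossLseHzNWx4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# Index the batch by comment ID so any single coding can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwAo8WZossLseHzNWx4AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["emotion"])         # fear
```

Indexing once into a dict turns each subsequent ID lookup into a constant-time operation, which matters when browsing samples interactively.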