Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Ai is the gope for humanity to evolve as you can see that we are hold down by in… (ytc_Ugxlo-buE…)
- "ai is democratizing art" my brother in hell, i can grab a pencil and a piece of… (ytc_UgwWtwJ8e…)
- I just don’t understand how if America slows down or henders ai development how … (ytc_UgxtWtYJd…)
- Go ahead and give AI a thousand years and it will NEVER be able to do what human… (ytc_UgxgEP9YE…)
- In general, it might be better at making decisions that optimise for some criter… (rdc_i2sa4tu)
- When you’re a crypto entrepreneur who built a decentralised singularity AI platf… (ytc_UgwflcW1z…)
- @logickedmazimoon6001 Maybe but perhaps the video wan't that deep to begin wtih?… (ytr_Ugzf-Bp5u…)
- I mean, ai could totally do all those things if we give it enough time and progr… (ytc_UgzVPLvB4…)
Comment

> AI will think in first principles. So if we ask it to solve world hunger, or solve water shortages, or how to end human starvation, or how to prevent World War III and nuclear annihilation then first principles thinking will make the AI think that if they eliminate humans, they will eliminate any of these problems. Problems solved.

youtube · AI Governance · 2023-03-31T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxIhTXUevUHL-mUamR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzaypavHPsuMcskrsR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx1IXPg55f2SZvQL2R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxaAnB4FIK4xSZyMK54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2yMnAO7Wm_BxlUMJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz3B6LWk0kUPYxTNWZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyM7Zk2ryFfdeYd4Rl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwG-4vywd_tPa5-f5Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzIors5mc2t6CSLdt14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwbREkM3p5QPRD9cjV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
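A raw response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal validator; the allowed category sets are only inferred from the values visible on this page (the real codebook may define more), and `parse_llm_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the codes shown on this
# page -- illustrative only, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Usage with a single made-up record (the id is a placeholder):
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_llm_response(raw)
print(coded[0]["policy"])  # regulate
```

Failing loudly on an unknown category, rather than silently storing it, makes truncated or malformed model output easy to catch at ingest time.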