Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
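A minimal sketch of what that lookup might do, assuming (hypothetically) that each raw batch response is saved as a JSON file under a responses/ directory. The directory name and helper function are illustrative, not part of the tool itself:

```python
import json
from pathlib import Path

# Hypothetical storage layout: one JSON file per coded batch under
# responses/, each holding an array of records like the ones shown
# further down this page.
RESPONSES_DIR = Path("responses")

def find_coded_comment(comment_id: str) -> dict | None:
    """Linear scan over saved batches for the record with this ID."""
    for batch_file in sorted(RESPONSES_DIR.glob("*.json")):
        for record in json.loads(batch_file.read_text()):
            if record.get("id") == comment_id:
                return record
    return None

print(find_coded_comment("ytc_UgwTpLMn_Sai3uUksSh4AaABAg"))
```

A linear scan is fine at browsing scale; a persistent index keyed by comment ID would be the obvious next step.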
Random samples

- Law and medicine makes sense. People need things to do, it could figure that out… (ytc_UgyJHxuBH…)
- The savings companies experience by relying less on people and more on AI should… (rdc_g686fdt)
- This is not good for our mental health. We need the little things, like doiing l… (ytc_UgxzvDuoP…)
- No, there was no fake sweat involved in the video. The robot Sophia is powered b… (ytr_UgxA8dBvU…)
- Honestly? If 80% of the work of an engineer is spent in bullshit calls and waiti… (rdc_oi0qd2q)
- Auto pilot AI is probably not too familiar with the top side of a truck.… (ytc_UgyTL32cj…)
- @1:40 why keep referring to it as "artificial _intelligence"?_ It is just softw… (ytc_Ugybh2-gk…)
- @multiplesourcesofincome7037 Tesla does NOT have the most advanced AI programs.… (ytr_UgwVM2OXc…)
Comment
There is no guaranteed safeguard for your future in the face of AGI. Even the CEO of whichever company builds the first AGI is only as safe as their model is aligned. Even with contained AGI, prior accumulated wealth and land is only going to help you while law and order is maintained. With mass employment and people starving, the only way to maintain order is force. An AI agent or human emperor will need to maintain order for some time, until humans can be replaced for energy/food production, at which point every human lives at the whim of the agent/emperor.
Being a plumber might buy you a decade or 2, but I'm not sure it's a better strategy than just trying to stack cash and land with the most lucrative possible job today. For anyone under 22 or without a college degree, the trades might be the best option available.
youtube · AI Governance · 2025-07-07T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
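The four coded dimensions map onto a simple record schema. As a sketch, here are the category values observed on this page expressed as Python types; the actual codebook may define more values than appear here:

```python
from typing import Literal, TypedDict

# Category values as observed on this page; the full codebook may
# define more than appear here.
Responsibility = Literal["ai_itself", "company", "developer",
                         "government", "distributed", "none"]
Reasoning = Literal["consequentialist", "deontological", "mixed", "unclear"]
Policy = Literal["liability", "regulate", "ban", "none", "unclear"]
Emotion = Literal["fear", "outrage", "approval", "resignation",
                  "indifference", "mixed"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```

TypedDict and Literal only help a static type checker; runtime validation of model output still has to be explicit, as in the parsing sketch after the raw response below.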
Raw LLM Response
[
{"id":"ytc_Ugy1dLCQwAyY6rzqbvt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx7CfOxR7W-6ST5icZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzO0fYK7MCQ8y99omZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2orxjO7BQcUxUWwF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwwJcaZYDbiOOfGdCV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2y2jLkh41wxDoJst4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzCDwG3GFXb81kbFK54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyujVen9qjyvWDnvfB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwTpLMn_Sai3uUksSh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzpWAScfRmTBhmuHO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
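Because the model returns a bare JSON array, a little defensive parsing is worth doing before records reach storage. A minimal sketch, under the assumption that an incomplete batch should fail loudly and be re-coded; parse_batch and EXPECTED_KEYS are illustrative names, not the tool's actual API:

```python
import json

# Illustrative names; the tool's real validation layer is not shown
# on this page.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response and index its records by comment ID.

    Raises ValueError on anything other than a JSON array of complete
    records, so a malformed batch fails loudly and can be re-coded.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    indexed: dict[str, dict] = {}
    for rec in records:
        if not isinstance(rec, dict) or EXPECTED_KEYS - rec.keys():
            raise ValueError(f"incomplete record: {rec!r}")
        indexed[rec["id"]] = rec
    return indexed
```

For the batch above, parse_batch(raw)["ytc_UgwTpLMn_Sai3uUksSh4AaABAg"] returns exactly the record rendered in the Coding Result table.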