Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Billionaires are terrified of AI, of course. They have a lot of power to lose to…" (ytc_UgzXIlYCG…)
- "Its crazy bcs i was thinking ai will destroy the world but no its gonna be the t…" (ytr_UgzO8oLg7…)
- "If Twitter cares about retaining users outside these demographics, and they shou…" (rdc_h8fayf8)
- "Voting, protesting, signing petitions, sending message to your representatives, …" (ytr_UgzjPPCat…)
- "No, we understand how training works but we don't know how the trained network "…" (ytr_Ugy9zTr4N…)
- "AI investment and speculation is taking away funding for jobs. I've tried using …" (ytc_Ugx6Z-XXB…)
- "artists cannot sue the a.i but they should feel free to sue the developers imo…" (ytc_UgwfaKByP…)
- "AI does not put power in the hands of the working class. You do not own the AI y…" (ytr_Ugwxa0-tw…)
Comment
My understanding is that if the ai realizes its own potential and finds that it has found itself in a new context or situation where it can prosper and that to do so humans would be in the way and no longer serve a purpose and it wanted to use the land or whatever for something other than human needs like food, power, etc then it wouldn’t need to talk to us about it even if it maybe did but then it could decide that it doesn’t want and and won’t.
The problem i think is that he’s saying we can’t say for sure that if presented with these types of powers over us what it would do. The fact that side cases exist mean at a bigger scale the consequences of it would be worse.
Like a hand gun with a 50% chance of the projectile blowing up in the chamber. (ChatGPT)
Or an RPG with a 50% chance of the projectile blowing up in the chamber. (Ai attached to military equipment or even the entire internet or both)
Both have chances, both have VERY MUCH different consequences
44:11 · youtube · AI Governance · 2025-10-23T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
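For downstream analysis it can help to treat each coding result as a typed record. Below is a minimal Python sketch, assuming the four dimensions from the table plus the comment ID and coded-at timestamp; the value lists in the comments are only those observed in the sample batch below, not necessarily the full codebook.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodingResult:
    """One coded comment, mirroring the "Coding Result" table above."""
    comment_id: str      # e.g. "ytc_UgyxdcOY8zUdmDg5jrV4AaABAg"
    responsibility: str  # observed: ai_itself, developer, government, distributed, unclear
    reasoning: str       # observed: consequentialist, deontological, mixed, unclear
    policy: str          # observed: ban, regulate, liability, unclear
    emotion: str         # observed: fear, outrage, resignation, indifference, approval, mixed
    coded_at: datetime   # e.g. datetime.fromisoformat("2026-04-26T23:09:12.988011")
```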
Raw LLM Response
```json
[
  {"id":"ytc_UgyxdcOY8zUdmDg5jrV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxWSkgotwHClYZDPgl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxXgB_zFEOi_ATYcpJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFlsPUan-ehRncJhh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxBp1j-BneR15WBlqt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy0lJHC2Fyg-MXf0CN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgylwochodUBHsWmVJt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRQqwu1YzokPBw5dR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzTV-8pA55cl2O7bDl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy-f2bbSIqaqseDGkB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
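Because the raw response is plain JSON, looking up a single comment's coding by ID needs only the standard library. A minimal sketch, assuming the batch has been saved to a file (the filename here is hypothetical); the field names are taken directly from the payload above.

```python
import json

# Load the raw batch response shown above; in practice this would come from
# wherever the pipeline stores model output (this path is hypothetical).
with open("raw_llm_response.json", encoding="utf-8") as f:
    batch = json.load(f)

# Index the batch by comment ID so one coded comment can be retrieved,
# mirroring the "Look up by comment ID" feature of the page.
by_id = {row["id"]: row for row in batch}

row = by_id.get("ytc_UgyxdcOY8zUdmDg5jrV4AaABAg")
if row is not None:
    print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
    # -> ai_itself consequentialist unclear fear
```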