Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Why AI = All Humans Die
Humans require very specific and precise conditions to stay alive - and these conditions are extremely rare in the universe.
Out of 1 million possible temperatures the planet could have, we can only survive in a very narrow range of them.
Out of 1 million different chemical compositions the air could have, we can only survive in a very narrow range of them.
And this goes on and on, and each condition stacks on top of each other.
We also know that those exact conditions are NOT the most optimal for machines.
That's why we keep data centers in low humidity, low temperature rooms.
Wouldn't an AI prefer if the entire planet was like that?
And if the AI was powerful enough, would it not attempt to do that?
If you disagree, you must either be saying that:
a) AI is not going to get that powerful.
b) Intelligence is not hardcore about optimizing for goals.
c) The AIs might want that, but it will not be willing to hurt humans in order to do so.
I'd be very curious to hear what others think about this.
Platform: youtube · Video: AI Governance · Posted: 2025-01-14T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxtgoHjhkpFEjQD51F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgznnaeECEmpOUOQ0UV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy5uoTRQtcmtnoF81p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzC1s9GCIb9td0Thcl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzClmfl3tfSoFR9BvR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwA_ncTXpBp4zIgUAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzAR1OiNbuwL_gsxSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"frustration"},
{"id":"ytc_UgwEl5WDy6kAfpF0NFZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxwee-n-EOyvXX7TIR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwMYzRd9-d_dSNKUwl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
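A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical example: the `ALLOWED` sets are inferred only from the values visible in this single batch (the full codebook presumably has more categories), and `parse_batch` is not part of any real tool shown here.

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# batch only -- the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "frustration", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only rows whose values all fall inside the known codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every dimension must be present and hold an allowed value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(len(parse_batch(raw)))  # 1
```

Rows that fail validation would typically be queued for re-coding rather than silently dropped, since a malformed LLM response is often recoverable on retry.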