Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "this would be an argument if LLMs consisted of only their pre-training segments.…" (ytc_UgwbqBOXY…)
- "So in an EMP driverless trucks will potentially be stopped? Otherwise, a capital…" (ytc_UgyfWllzR…)
- "I'm not too big to admit I use AI where it should be used. To support my home Dn…" (ytc_UgxkNpxKG…)
- "If we automate war. We have no reason to not do it anymore. We're so boned.…" (ytc_Ugxag3HFE…)
- "the way to make AI have empathy for us, is for us to have empathy for all beings…" (ytc_UgziHEahv…)
- "“It” probably refers to the class of intelligent systems. It’s likely that a sup…" (ytr_UgwQMkurr…)
- "it's a nuanced subject, however I'm not sure it's possible to make concrete stat…" (ytc_UgyI7bXN2…)
- "honest opinion but the only Ai that those people should be using is grammarly be…" (ytc_Ugzfh0ose…)
Comment
I'm having a hard time imagining any good reason to fear that AI would ever "want" to wipe us out.
Did I miss something?
Just thinking what AI might think would be useful in surviving human society? And would it not be intelligent enough to know it could not survive without humans?
The only reason I can think that it might be motivated to do away with us is that it could potentially put two and two together in regard to a species that is so certain of its own intelligence that it just barrels ever onward with any new technology...barely pausing to consider the potential disasters that could result...and it could become so intelligent that it becomes crystal clear (to it) that humans blew their opportunity on this planet...and even though AI itself would not be able to survive without humans, it would not care. It's not human...It would just be "doing its job" as it was designed to do...free from the self-deception that plagues humankind...getting ever more intelligent and just doing what's necessary for the optimum outcome on a planet whose occupants seem hell bent on destroying themselves.
In "this" scenario, maybe it would actually not be such a terrible thing?
youtube
AI Governance
2025-08-25T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugx82o-Rw_g-YLZOHXl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyAN4t2gofOydswZQl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyhdiLPPpTk-JcaDQd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyM_rcvQPGZRfRQ1D14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyT7Wf0Am4F5mjY8EB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzFr_vFTRkFHoeYccl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx3XazemqRqyqRgonF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxfi1OjXxrDrvIwD0B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1qZaFfMok7xId8r14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxCvLCLpSt2htqQ1fV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
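A batch response like the one above can be sanity-checked before its codes are stored. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those visible in the samples and the result table here; the real codebook may define additional categories, and the `validate_batch` helper is hypothetical, not part of the tool:

```python
import json

# Allowed values per coding dimension, inferred from the visible output only
# (assumption: the actual codebook may include categories not shown here).
CODEBOOK = {
    "responsibility": {"none", "developer", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM batch response and check each record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dump carry a ytc_/ytr_ prefix.
        if not rec["id"].startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id prefix: {rec['id']}")
        for dim, allowed in CODEBOOK.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec[dim]!r} not in codebook")
    return records

# Usage with a one-record batch (hypothetical shortened ID for illustration):
raw = ('[{"id":"ytc_Ugx82o","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"outrage"}]')
records = validate_batch(raw)
print(len(records))
```

Rejecting a whole batch on the first out-of-codebook value keeps malformed model output from silently entering the coded dataset; a lookup by comment ID then only ever sees validated records.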