Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There's never been a safety argument. The risk is unfounded and simply exists as a means to a political buy-in. Even in a wildly optimistic world, if an AGI is completed within a year, adversaries will have already pursued their own interests, say, in AGI warfare capabilities, because that gives me an advantage over you. The only global cooperation that can exist, like nuclear weapons, is through power, money, and deterrence, and never for the "goodness" of human safety.
The AI safety sector of tech is rife with fraud, speculation, and unsubstantiated claims to hypothetical problems that do not exist. You can easily tell this because it attempts to internalize and monetize externalities of impossible scale and accomplishment, so that you can feel better about sleeping at night. The reality is, my engineering team from any country, can procure any size compute of the future and the engineers will build however much I pay them. AI has to present an actual risk to human life in order for any consideration of safety.
Source: reddit
Topic: AI Governance
Posted: 2024-05-27 (Unix timestamp 1716798653)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_l5wqrfm", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_l5uqe2r", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_l5uw8je", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_l5usc5p", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_l5vm32l", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
```