Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The AI 2027 scenario was developed by superforecasters with excellent prediction track records, deep technical knowledge of AI, and sophisticated models of the behavior of the companies, nations, and individuals involved. They spent months on research and wrote up one of the scenarios as an example.
Also:
- About half of all published AI researchers say there is a significant risk of human extinction from AI ("Thousands of AI Authors on the Future of AI").
- 300+ leading AI experts signed a statement saying that "Mitigating the risk of human extinction from AI should be a global priority" (CAIS Statement on AI Risk).
- Among AI experts, the minority who are familiar with basic AI safety concepts are much more likely to expect future AI systems to be uncontrollable agents rather than simple tools ("Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts").
- Many of the very top AI researchers in the world -- including Nobel Prize laureate Geoffrey Hinton and the world's most cited living scientist Yoshua Bengio -- have publicly warned that superintelligent AI could take over and destroy the world within the next decade or two.
Source: youtube · Topic: AI Governance · 2025-08-02T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
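A coding result like the table above can be carried as a small typed record. The sketch below is a minimal Python version; the field names mirror the table's dimensions and are an assumption, not a documented schema of the coding tool.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment. Field names are assumed from the
    dimension/value table above, not from a published schema."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

# The row shown in the table above, as a record.
result = CodingResult(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
print(result.emotion)  # fear
```

Using a dataclass (rather than a bare dict) makes missing dimensions fail loudly at construction time instead of surfacing later as silent `None` values.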
Raw LLM Response
```json
[
{"id":"ytr_Ugxm1HTT7I17lRidVsd4AaABAg.ALJRhFE2RnZALKBztAuU-c","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugxm1HTT7I17lRidVsd4AaABAg.ALJRhFE2RnZALKIHWfg85k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugxqrfq_uyncrOR8pdd4AaABAg.ALJRE9Eu4huALJi1E25w9f","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugw6iHW2o7ICBwXUbl94AaABAg.ALJPSGnEtRIALJiLNd-Ful","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugzp0-sHz34KKTIorzh4AaABAg.ALJMEsTPiLvALJhdG2J02o","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzjLsKym6OCjk0SKjh4AaABAg.ALJHSZ6Bfd3ALJdsutGZKJ","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgzjLsKym6OCjk0SKjh4AaABAg.ALJHSZ6Bfd3ALJew-ideGY","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyaW5yapa8XyJS-MNh4AaABAg.ALJFWFDi8jxALJgmGT22pJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyaW5yapa8XyJS-MNh4AaABAg.ALJFWFDi8jxALJoG90o8fP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyGLz22IEhrBLXWjm54AaABAg.ALJE01QYiQbALJhv8Bnm_P","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
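Before a batch response like the one above is written back to the database, each record should be checked against the codebook. A minimal Python sketch of that validation step; the allowed values are assumed only from the codes visible in this response, and the real codebook may define more categories.

```python
import json

# Allowed values per dimension -- inferred from the response above,
# not an authoritative codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "outrage", "fear", "resignation"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject any record whose
    dimension values fall outside the assumed codebook."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
    return records

# Hypothetical one-record batch in the same shape as the response above.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded[0]["emotion"])  # fear
```

Validating eagerly here keeps malformed or hallucinated codes out of the coded dataset: a misspelled category raises immediately instead of becoming a phantom level in downstream analysis.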