Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Artificial intelligence is NOTHING compared to robotization, informatizat…
ytc_UgyUchN6q…
The only engineers who feel AI writes "good code" don't know any better, as they…
ytc_UgwgWUvss…
Real. I did traditional and it took me SO long to learn how to use digital, abou…
ytr_UgyFwB3h1…
Amazing that Fox, Alpac, etc will need to self-censure their Hasbara singularity…
ytc_UgwlVDKkN…
If you a foreigner a person's going to automatically stick out like a sore thumb…
ytc_Ugz3RGOb-…
This is a well known facet of the way Chat GPT works. It uses a prediction syste…
ytr_UgwFGa7la…
@arakemi1080 20 years ago none of this was possible… we really don’t know where …
ytr_UgydCAXyt…
For me, that's the biggest thing people don't seem to realize. For most people, …
ytr_Ugydd4pFL…
Comment
It is a misunderstanding to suggest that scientists have proven that AI will overtake us in 2027. This idea stems from a thought experiment called the "AI 2027" scenario, which explores potential paths toward artificial superintelligence (ASI). While based on expert forecasting, it is considered a speculative and controversial scenario, not a proven fact. There is no scientific consensus on when or if artificial general intelligence (AGI) or ASI will be developed, let alone a consensus on a specific date like 2027.
Underestimation of AI's limitations. Critics and other experts point out that current AI lacks consciousness, common sense, and general-purpose reasoning. Many significant challenges must be overcome before true superintelligence could emerge.
Fear versus reality. Some argue that the popular fear of an AI takeover is based on science fiction rather than scientific reality. It's an exaggerated portrayal that can generate unnecessary anxiety and misunderstanding.
In short, the concept of an AI takeover in 2027 is a misunderstanding based on a controversial forecasting document, not a proven scientific claim.
youtube
AI Governance
2025-10-12T05:3…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxzyXDhGlZ2Y6NbksJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXDqTj-jtxVwdDwk14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8N0q0zjFmYYHnva94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzPDjlYQk9NDpf3emt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxcfBEESh2wwkOZKx14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylTqFfD6egXPvYCjZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyTcxC8RcVjpISl3xN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx86yckDoVSifrxqcV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxpjNrI8KlPLQIvnV14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzmrPO7_k8IlEBfrPl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
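The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated is below; the allowed value sets are assumptions inferred only from the values visible in this sample, and the real codebook may define more categories.

```python
import json

# Assumed value sets per dimension, inferred from the sample output above.
# The actual coding scheme may include additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "developer", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"approval", "mixed", "fear", "indifference", "outrage"},
}

def parse_coding(raw_text):
    """Parse a raw LLM coding response and split records into those whose
    values fall inside the allowed sets and those that do not."""
    records = json.loads(raw_text)
    valid, invalid = [], []
    for rec in records:
        ok = all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
        (valid if ok else invalid).append(rec)
    return valid, invalid

# Two records copied from the raw response above, for illustration.
raw = """[
 {"id":"ytc_UgxzyXDhGlZ2Y6NbksJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgylTqFfD6egXPvYCjZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

valid, invalid = parse_coding(raw)
print(len(valid), len(invalid))  # → 2 0
```

Validating against a closed vocabulary like this catches the common failure mode where the model emits a label outside the codebook, so malformed records can be flagged for re-coding rather than silently stored.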