Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "PHDs in CA are now selling tamales out of their trunk to survive. 🫔 replace brai…" (ytc_Ugzm4ViWo…)
- "yeah i guess i could agree wanting to watch an ai made show that looks at your i…" (ytr_UgzBtHDDt…)
- "The only thing I was convinced of was that she didn't understand AI fundamentals…" (rdc_gm0ks7o)
- "Thank you for those altered algorithms It helps me not being able to find anythi…" (ytc_UgyS4XCAd…)
- "AI (Grok 4) Saved My Life from Heart Disease AI (Grok 4) saved my life when n…" (ytc_UgwoOzlwb…)
- "As an author AI Art Tools can be very nice to bring your ideas to live. Ofc this…" (ytc_Ugx8kcjYy…)
- "As a software programmer, this is mostly believed by people who have never writt…" (ytc_Ugzyh9rMG…)
- "this is this most fakest story on the internet ive ever seen this is most likely…" (ytc_UgzzF13tZ…)
Comment
Because from the perspective of an ASI, it would be an optimal use of resources to transform the ecosystem into usable material for its own purposes. These systems, once trained to achieve goals, behave like super-optimizers. Such systems, endowed with intelligence completely surpassing us, will seek to preserve themselves, to self-reproduce and to maximize their power, through the logic of instrumental convergence, whatever their ultimate goals may be. This means, inevitably, eliminating all living beings to use the material they are made of for other purposes.
The AI industry should not be allowed to continue this race for superintelligence. Until the alignment problem is resolved (and it is far from being resolved), we should prohibit, through international treaties, the construction of systems that exceed human intelligence. We have been able to make international treaties against nuclear proliferation, against human cloning, and against bacteriological weapons; we must do the same against AGI before it is too late and we lose control forever.
Source: youtube · Topic: AI Governance · Posted: 2025-08-02T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugx9p4XF2rrOCp0yAwB4AaABAg.ALK0CuyvTE3ALLELzOA5v7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugx9p4XF2rrOCp0yAwB4AaABAg.ALK0CuyvTE3ALLUM1vRDiC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw9Q-7DN9A38yOfffp4AaABAg.ALJwsKfOMFWAMOL7fH3Wmi","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgwoJFZ6CSfCHqfLWxB4AaABAg.ALJwC0go-40ALLVx2HbQXu","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyVaiDcbu1CsmYWlwh4AaABAg.ALJmTKvShSQALMArcNP1vW","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxUuMTpelrDoh6hbX54AaABAg.ALJihMhOBNPALNwkWq250u","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugx4Dco6u_--9YGdzWd4AaABAg.ALJiOJxXjEiALJrmc2Jj0C","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwUPHz2OQK8S4klE9d4AaABAg.ALJhQkLSGw2ALNDgF5r99U","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugwj8m73Ln7TbEVMp9h4AaABAg.ALJhGnrpcNfALJiryE1VZf","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwyeDOJtXbxCYqlVhl4AaABAg.ALJhCPXwmJYALJiTIin5n3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
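A batch response like the one above can be validated before its codes are stored. The sketch below is a minimal check, assuming the allowed labels per dimension are only those that actually appear in this dump (the real coding scheme may define more): it parses the JSON array and reports any record that is missing an `id` or uses a value outside the scheme.

```python
import json

# Allowed labels per dimension, inferred from the codes visible in this dump;
# the actual codebook may allow additional values.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"unclear", "none"},
    "emotion": {"fear", "indifference", "approval", "mixed", "outrage"},
}

def validate_batch(raw: str) -> list[str]:
    """Return human-readable problems found in a raw LLM batch response."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]
    if not isinstance(records, list):
        return ["response is not a JSON array"]
    for i, rec in enumerate(records):
        if not isinstance(rec, dict):
            problems.append(f"record {i}: not a JSON object")
            continue
        if "id" not in rec:
            problems.append(f"record {i}: missing 'id'")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(
                    f"record {i}: {dim}={value!r} not in {sorted(allowed)}"
                )
    return problems

# Hypothetical single-record batch in the same shape as the dump above.
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(validate_batch(raw))  # an empty list means the batch passed every check
```

Running this on an unparseable response, or on a record with a label outside the scheme, returns a non-empty list of problems, which makes it easy to reject or re-prompt a bad batch instead of writing malformed codes into the results table.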