Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
They have like five subjects in two hours??? And that’s effective??? I’d rather …
ytc_UgwazrMgN…
Chat gpt is ready to offer you a full refund and negotiate a settlement to not u…
ytc_UgzxXFnn1…
director of AI and 8,000 other people at Microsoft were laid off recently...were…
ytr_UgxoOPaeB…
I've noticed that any ai in general is either incredibly biased or afraid to sta…
ytc_UgxCVHeUE…
Did ChatGPT wrote that monologue for you? Be honest here, I mean I agree with ev…
ytc_Ugx8l0TV9…
The only outcome for AI is that the rich enslave us and use AI as an excuse for …
ytc_Ugxpwe3Yf…
everyone liked it till they found out it was ai, youre all hypocrites, you claim…
ytc_UgxLkD_EQ…
Big world differents ethics and views differents goals good or bad, what happens…
ytc_UgxI9_LO6…
Comment
@prod_liguo Maybe. I'm not sure it would be easy.
But more importantly, any given AI is unlikely to have "destroy humanity" as its primary goal. Instead, the main risk is that it ends up with some weird goal(s) that emerged out of the erratic processes by which we create these AIs, and that in the long term, human existence is not optimal for the maximizing of those goal(s).
In the short term (whether days or decades, nobody knows) it will need humans to do things in the physical world, and I think it's very unlikely to destroy us before it has its own superior capability to operate in the physical world.
youtube
AI Governance
2025-11-29T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgxcDfCD4b3wfcrWK_p4AaABAg.AQ-fhQeIBvEAQ07l5pEBb3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxcDfCD4b3wfcrWK_p4AaABAg.AQ-fhQeIBvEAQ084od0PGy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxgCU7bY8hnLnIUD8t4AaABAg.AQ-fdfa6FC7AQ08LHbi5xq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxgCU7bY8hnLnIUD8t4AaABAg.AQ-fdfa6FC7AQ0E5wuojeV","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwZTecmqJLoPT5ORGZ4AaABAg.AQ-fbENq5EpAQ-gr6B1Oab","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx_CAHkc0wBxWPF3Yp4AaABAg.AQ-edq0LAqkAQ4mtuILIPK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugx_CAHkc0wBxWPF3Yp4AaABAg.AQ-edq0LAqkAQ5DM3dUenl","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxoV7WygmjW9boboXd4AaABAg.AQ-e6lJWtNyAQHclEU4JzY","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxoV7WygmjW9boboXd4AaABAg.AQ-e6lJWtNyAQekQWTUAyK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugx0HmtYfuy5si1fF9d4AaABAg.AQ-dAuvBcDOAQ-e4JkvYXY","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
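The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing and validating such output before storing it: the allowed-value sets below are inferred only from the values visible on this page and may be incomplete.

```python
import json

# Allowed values per coding dimension. These sets are assumptions
# reconstructed from the values visible in this dashboard; the real
# codebook may permit additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed entries.

    An entry is kept if it is a dict with an "id" field and every
    coding dimension carries a value from the allowed set.
    """
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not isinstance(entry, dict) or "id" not in entry:
            continue
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(entry)
    return valid

# Usage with a hypothetical single-entry response:
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(parse_coding_response(raw))
```

Validating against the codebook up front catches the most common LLM failure modes for structured coding (hallucinated labels, missing fields) before they reach the database.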