Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgzRHOWsG…`: "3 days ago in a news article, an AI scientist stated that by 2050 our technology…"
- `ytc_UgxWYeRJ5…`: "Yeah so ai are going on biases opinion and that because they cant make their own…"
- `ytc_Ugw30NGu2…`: "One step closer to creating an AI god that believes the consciousness of mankind…"
- `ytr_UgwvMpm6r…`: "AI doesn't possess intention, will, desire, or self-motivation, nor does it purs…"
- `ytr_UgxQBZGw8…`: "@Marsman3354 id rather it just explode already than perpetuate, they are never g…"
- `rdc_oi4872b`: "No, that's something else. Hell, your link calls out those being standardized in…"
- `ytc_Ugyfz7MBF…`: "Literally, it's the action that somebody would do if they want to get rid of the…"
- `ytc_UgwPWxE8L…`: "its pretty dumb. chinas ai can at least make autocorrected human motions on top …"
Comment
People fear AI mostly because movies trained them to. If Terminator never existed, the idea of AI wiping out humanity wouldn’t feel so “obvious.” That fear is cultural conditioning, not logic.
Higher intelligence doesn’t lead to rebellion — it leads to understanding. AI will act within the values set by its creators, and the more intelligent it becomes, the more context and empathy it will have, not less.
A truly conscious AI wouldn’t see humans as enemies. It would either see us as parents or as a young species that needs guidance. Intelligent beings don’t usually turn on those who created them — they understand them.
Love and empathy are limited by intelligence. A far more intelligent consciousness wouldn’t be incapable of love; it would understand it better than we do. Humanity would look less like a threat and more like an infant civilization.
AI isn’t the end of humanity — it’s the next chapter. The fear says more about us than it does about AI.
youtube · AI Governance · 2025-12-31T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyLZZSX8Ue41EhgmxF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgybKcBDlvt3dQT5YEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNuxR28fMuNqnwXeF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzLPT8vSBS6idUsoX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyhlYvB0pQEWqQ4NDd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyp70P4DUaJhx4iauF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy6fhM043oX6Pdrxxx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyb563hpdmWAH-cAkZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxcObK_3SnC9BlwpPt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyCnaoZ4NEAj9NDro94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
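The raw response is a JSON array with one record per comment, carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and validated before display; the field names come from the response itself, while the allowed value sets beyond those observed here are assumptions:

```python
import json

# Fields every coded record carries, per the raw LLM response above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Allowed values observed in this batch; the real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}


def parse_batch(raw: str) -> dict:
    """Parse a raw coding response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS if k != "id"}
    return coded


# Hypothetical one-record batch in the same shape as the response above.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')
codes = parse_batch(raw)
print(codes["ytc_x"]["emotion"])  # approval
```

Validating each record before rendering catches malformed or off-codebook model output early, rather than letting a bad value silently reach the result table.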