Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Autistic artist here. I was creating art long before Ai. I like control of my …
ytc_UgwVW2fBA…
How long until the mad scientists teach A.I. how to recreate and rebuild itself.…
ytc_Ugw9pYIvo…
Jack Welch pushed offshoring. Now it’s AI and robotics. What about taxing billio…
ytc_UgxzzOaM9…
Hello, ''AI'': since when is it endowed with intelligence?
What's more, when I hear sovereign…
ytc_UgxaPRmYw…
If the AI singularity ever happens, the T-800 is knocking on JimBob’s door first…
ytc_Ugz66b51Q…
I can understand why that might feel unsettling! The conversation around AI can …
ytr_UgwQErSju…
the self-driving cars leading the current wave are Tesla Model S, an electric ca…
ytr_Ugj7Pi5-k…
Terminated, the ancients predicted it ,one power will control ,one power will de…
ytc_Ugz_Ljovm…
Comment
I see several flaws in the argument that a perfectly functioning AI will inevitably conclude it must destroy humanity. Such an AI—one capable of perfect reasoning—would lack ego, malice, and perhaps even a survival instinct. As humans mature intellectually, they often transcend base fears and pursue the common good. Likewise, a truly evolved AI, in my view, would not be dangerous—but quite the opposite. This is not to deny the risks posed by corrupted or misaligned systems, but perfection, by definition, ought not to be feared.
Platform: youtube · Topic: AI Governance · Posted: 2025-06-16T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyAiTOedrBS8WNTDGd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqLOJHMpGxwaQbFtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyY04CCzB8EuCV5_bF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTvpJrg-VRAsZ6zpJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyf_7ygdN7dVADAw6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKQ8402Egi5bDRRfF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz8_ThM8byOBjplkQR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwfkGfYHhmjfE6sTPd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzB8GwtjR1rjEJbOhR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwnAFwiAX2Nn3_VMhV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
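Because the raw response is a plain JSON array with one record per comment, looking a coding up by ID reduces to parsing the batch and indexing it. Below is a minimal Python sketch of that lookup, with a light sanity check against the dimension values seen in this batch. The file name `raw_response.json` is a hypothetical dump of the array above, and the value sets are inferred from this single response, not from the tool's full schema.

```python
import json

# Dimension values observed in the batch above; an assumed, non-exhaustive
# schema inferred from this one response.
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "mixed", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw batch response (a JSON array) and index records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

def off_schema(record: dict) -> list:
    """List any dimensions whose value falls outside the observed sets."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if record.get(dim) not in allowed]

# Hypothetical dump of the array shown above.
raw = open("raw_response.json").read()
codings = index_codings(raw)

# The comment inspected above; its record matches the Coding Result table:
# ai_itself / virtue / none / approval.
record = codings["ytc_UgxTvpJrg-VRAsZ6zpJ4AaABAg"]
print(record)
assert not off_schema(record)
```

Indexing by ID rather than scanning the array also makes it easy to notice records a model dropped or duplicated within a batch, which is worth checking before trusting any per-comment lookup.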