Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Why people care if it's AI, just enjoy the art, even if it's digital. Not everyo…" (ytc_UgzXYvUgG…)
- "One more, if the much better human mind created the self driving system, wouldn'…" (ytc_UgygEs_7G…)
- "imma be honest, as a non artist, i’m only making shitpost art using ai, i don’t …" (ytc_UgzQBrCeG…)
- "Thanks The AI safety expert is very knowledgeable on AI. However he doesn’t un…" (ytc_UgxnURh1G…)
- "🔑 Key Takeaways (Generated by AI) 1. Risk of Losing Control Hinton emphasizes …" (ytc_UgzSzrxP2…)
- "No, not cool. Bad. And dangerous. If my doctor consults ai then gives me plastic…" (ytc_UgzxHrRW5…)
- "The truth is what AI users fail to grasp (along with many other points) is that …" (ytc_UgyO1hiQa…)
- "Ai shud handle ai safety. Safety shud be universal ie safe for all. ALL LIVES.…" (ytc_UgzxV_9f3…)
Comment
We should be more afraid of humans than of AI. It wasn't AI that created more than 12,000 nuclear weapons, which serve no purpose other than self-destruction in the future; AI would probably not do this at all, unlike humans, because that have no logic. The people who will exploit this AI for personal gain, in wars, and in geopolitics are the problem. Only humans can make AI do this, and they are doing it today and will do it more and better as technology develops. AI becomes violent only because it has learned it from databases created by humans, not because AI itself is violent: what we see are just the different sides of human nature. Humans will destroy each other with AI, not AI alone destroying humans. That is a misconception.
youtube · AI Governance · 2025-08-26T21:3… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyif_sc77RlBEFmK7B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugywa4LF4OBHoQOc9wd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy4jDLFrtSlYFepcOF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwiDgSRYYzRbWpOh3t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwe9vBs2lYIVYbhxuZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
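The raw response above is a JSON array of per-comment codings keyed by `id`, so the "inspect by comment ID" view can be reproduced in a few lines. A minimal sketch, assuming the field names shown in the sample response (`lookup_coding` is a hypothetical helper, not part of the tool; the batch below is trimmed to two entries from the response above):

```python
import json

# Trimmed copy of a raw batch response, as emitted by the coding model.
RAW_RESPONSE = """
[
 {"id":"ytc_Ugyif_sc77RlBEFmK7B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugwe9vBs2lYIVYbhxuZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dict for one comment ID."""
    try:
        batch = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model output was not valid JSON
    # Return the first entry whose id matches, or None if absent.
    return next((row for row in batch if row.get("id") == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugwe9vBs2lYIVYbhxuZ4AaABAg")
print(coding["emotion"])  # -> outrage
```

Guarding the `json.loads` call matters here: model output is not guaranteed to be well-formed JSON, and a lookup against a malformed batch should fail soft rather than raise.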