Raw LLM Responses
Inspect the exact model output behind any coded comment. You can look a comment up directly by its ID, or click one of the random samples below; a minimal sketch of such an ID lookup follows the sample list.
Random samples — click to inspect
- Change the robot's voice from this sissy boi to manly, German accent then we fee… (ytc_UgxD2KYML…)
- The problem with "stolen art" argument is that to make art people often "steal" … (ytc_Ugx5PkKra…)
- You can tell Elon is upset about what is probably going on with AI behind closed… (ytc_Ugy2_ImFW…)
- AI is not like internet or mobile my dear CEO guy.😂 AI learns by itself. It will… (ytc_UgyXFgUQY…)
- I'm a software engineer. AI is stealing my job. I'm trying to adjust what I do m… (ytc_UgwANXgrM…)
- a question i have is who in the world is demanding for AI? like why did it ever … (ytc_UgzYuuuQ8…)
- AI wants to be human? It is said: "To err Is human". I want somebody to as an … (ytc_UgxOUvyq0…)
- As a hater of ai, people made fun of it by making fan art of it?… (ytc_UgzItQa-o…)
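As a rough illustration of what the ID lookup does, here is a minimal Python sketch. It assumes the raw batch responses are stored as JSON arrays of per-comment records with the shape shown in the Raw LLM Response section further down; the file name raw_llm_responses.json and the helper name find_coding are illustrative, not the app's actual implementation.

```python
import json
from typing import Optional


def find_coding(comment_id: str, path: str = "raw_llm_responses.json") -> Optional[dict]:
    """Return the coded record for one comment ID, or None if the model never coded it.

    Assumes `path` holds a JSON array of records shaped like
    {"id": "ytc_...", "responsibility": "...", "reasoning": "...",
     "policy": "...", "emotion": "..."}, i.e. the raw response format shown below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None
```

Looking up ytc_UgzdpvSFnTKLTXC52WF4AaABAg this way, for example, would return the user / deontological / unclear / fear record that the Coding Result table below summarizes.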
Comment
I come from Germany, I am not a professor or anything like that, and yet I explicitly WARN you! The problem is not KI, but rather humans who think they can set limits for it! There may be clever people who program such things and give them the algorithm, but an KI in over 20 years will be, no matter how clever we humans think we are, far more than a hundred thousand times more intelligent than a human could ever be. Anyone who thinks they can control this power with security measures etc. should be aware that this is only an illusion, because an KI with so much knowledge knows a hundred thousand times better how to circumvent such precautions. The programmer would have to think faster than an KI, and that won't happen. But even if it didn't, a human will make mistakes, and the KI will thank them for it!
youtube · AI Governance · 2025-11-30T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzV0NQNbToUAFGDaRR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzdpvSFnTKLTXC52WF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxxTL9Sl8VRFBaDebx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgykMDMo5L9w-gudDu94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9DzUM0O8KxBWL-FN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwsRyIeWf9C7TkWw7p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgziNocEwHxNMaVhb7V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwiB9opri70Vayv-Zt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwOwJRk0Cicnb9xylJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxaKAdsIXtp2QO309p4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
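Because the model answers with one JSON array per batch, each entry has to be parsed, checked against the code book, and matched back to its comment ID before it can be shown as a Coding Result. A minimal parsing sketch, assuming the four dimensions in the table above are the full schema and using only the category values visible in this example (the real code book may define more):

```python
import json

# Category values observed in this example; the actual code book may allow more (assumption).
CODE_BOOK = {
    "responsibility": {"user", "developer", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}


def parse_batch(raw_response: str) -> dict:
    """Turn one raw batch response into {comment_id: coding}, skipping entries without an ID."""
    coded = {}
    for entry in json.loads(raw_response):
        comment_id = entry.get("id")
        if not comment_id:
            continue  # an entry without an ID cannot be matched back to a comment
        coding = {}
        for dimension, allowed in CODE_BOOK.items():
            value = entry.get(dimension, "unclear")
            # Fall back to "unclear" for values outside the observed code book.
            coding[dimension] = value if value in allowed else "unclear"
        coded[comment_id] = coding
    return coded
```

Run over the array above, this yields ten coded comments, and coded["ytc_UgzdpvSFnTKLTXC52WF4AaABAg"] reproduces the user / deontological / unclear / fear row from the Coding Result table.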