Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding by comment ID, or browse the random samples below.
Random samples
- "Does ANYONE look beyond this banter and understand how DANGEROUS it is for Human…" (ytc_UgwSPf_xk…)
- "I think that a lot of AI-generated art lacks that quality, but I have generated …" (ytr_UgwG83gzi…)
- "AI combined with social media is the modern Tower of Babel. The equating of fals…" (ytc_UgxSUiFo0…)
- "frrr and i hate how people say julia is ai like bro your just jealous you cant d…" (ytc_UgwdJXBkO…)
- "It's all a red herring. The immediate danger isn't a rogue AI, it is a Human abu…" (rdc_l5vm32l)
- "You could just use Ai to find out 😅😅 Here you go: "The Blank Slate: The Modern…" (ytr_UgylzyazH…)
- "AI already talks to you like a human n you can't hardly tell the difference at f…" (ytc_UgxmKIJ5v…)
- "The opinion of the robot is the opinion of the person who wrote their code.…" (ytc_Ugzw9VOoH…)
Comment
I have no "fear" of AI trying to get rid of humans because of their "inferiority" or trying to "take over the world". ONLY human beings are capable of such atrocities, only human beings are so evil. I DO believe that evil humans, in their greed and insanity, will use AI to continue to overthrow as many nations around them as possible and further enslave their fellow human beings while killing all those "useless eaters" they have no use for. We already live in a world where human life has no value unless some wealthy person wants to use you to make himself richer. As per USA "elites".
youtube · AI Governance · 2025-08-01T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgyCLBnvRRW2NkIF-eB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugzg8ZN-A0jXFzfapHF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgywNEQUtuam7n9Eg_t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgwkL2crjPVckKNTi-N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgwTTE2fXPu2lDSZyFd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_UgzKbXejLP_Zosm4lKZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugy-Fp8Dg6Vx9plRWKh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},{"id":"ytc_Ugx9DWWWUV3RIxzm-Tt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},{"id":"ytc_UgwPYaOTqr66o2fFqld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgxXExKUrAeSNQHtD_d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}]