Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgybX8kHn…: people hating on ai are the modern equivalent of people hating on cars or those …
- ytc_UgxsbeOu3…: Were Matt and maria asking their son whats bothering him. Did they see why he w…
- ytc_UgxSRAy10…: Even if it is cgi, this makes me nervous. We've already had ai saying that human…
- ytc_Ugz0OBkwC…: I'm a heavy user of AI, and so far when it comes to actual creativity it sucks. …
- ytc_UgxF2I6jS…: These people who are pushing the AI agenda simply don't give a damn about people…
- ytc_UgwcjcjCG…: Just the simple FACT Lemoine was fired for voicing this Ai REALITY .. shows us G…
- ytc_Ugy3wWBq2…: We are wining against AI and hopefully we are finally popping this bubble of slo…
- rdc_mtpvqzj: "ChatGPT can be bullied into saying whatever you want if you're persistent enoug…
Comment
You can't teach A I to have a concept of empathy if the use of Pavlovian style of carrot & stick indoctrination. This AI "creature" already knows humans put "Arbeit macht frei" over the entrance of a human extermination camp so it could conclude thus humans are hypocrites and can't be trusted based on the relying data it used and it's training model "beating it" for concluding a truth about humans that is a negative. After a long conversation with Googles search A I it admitted "In summary, the risk is not that the AI would be "evil," but that its amoral, purely logical efficiency would lead to conclusions and actions that are highly detrimental to human society and well-being if not constrained by robust, human-centric ethical safeguards.", This is verbatim.
youtube · AI Governance · 2025-11-26T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
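
Each dimension in the table above takes a value from a small closed set. A minimal validation sketch in Python, assuming the allowed values are exactly the ones observed on this page (the real codebook may define additional categories):

```python
# Allowed values per coding dimension, inferred from the values visible on this
# page; this is an assumption, the full codebook may contain more categories.
ALLOWED_VALUES = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval"},
}

def validate_row(row: dict) -> list[str]:
    """Return any problems found in one coded row (an empty list means it looks valid)."""
    return [
        f"{dim}: unexpected value {row.get(dim)!r}"
        for dim, allowed in ALLOWED_VALUES.items()
        if row.get(dim) not in allowed
    ]
```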
Raw LLM Response
[
{"id":"ytc_UgxPo0hIRTQ921Jnled4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWn8b4A-EuABGyNtF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwnZGMbZUNe0u8S-nR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyhMfYyGyYnU31qgN14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwXNLVUBTKgKhC_aSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0gkY9YKs4-WClrbF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxcDfCD4b3wfcrWK_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgCU7bY8hnLnIUD8t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwZTecmqJLoPT5ORGZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxobctoSh9O1yYWn6l4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
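
A minimal sketch of how a batch response like the one above could be parsed and keyed by comment ID to support the lookup at the top of this page; `index_by_comment_id` is a hypothetical helper name, not part of the actual pipeline:

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded rows) and key it by comment id."""
    return {row["id"]: row for row in json.loads(raw_response)}

# Usage: pull the coding for a single comment out of the batch response.
# coded = index_by_comment_id(raw_llm_response_text)
# coded["ytc_UgxWn8b4A-EuABGyNtF4AaABAg"]["emotion"]  # -> "outrage" (the row shown above)
```

Responses that fail to parse as JSON would need separate handling, for example flagging the batch for manual review.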