Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I think the blanket shaming/hate towards A.i makes it harder to push for ethical…" (ytc_UgxuBbRXk…)
- "It's probably an issue that I made an AI bot of my crush since I can't actually …" (ytc_UgxC-SSCw…)
- "There are no robots mining the colbalt in the Congo for all thsi tech! There are…" (ytc_Ugy40lVLz…)
- "So to me the argument is parsed down to a single initio: the more complex an 'ob…" (ytc_Ugzl0909S…)
- "Note: neuro is a project and I cant really explain it well but I can tell u that…" (ytr_Ugwsljen-…)
- "We can't even get human rights down, how we supposed to give rights to AI?…" (ytc_UgwFdQVCp…)
- "AI was supposed to give us freedom to live better lives, not turn us all into Ma…" (ytc_Ugz22qAKP…)
- "Wonder where AI learned this behavior? Maybe some people in history acted this w…" (ytc_UgwdJvd7E…)
Comment
I am of the opinion that the powers that be have no interest in the populace being well-educated and capable. In fact, I would argue that those in charge want us to be as unskilled as possible and to rely on AI (programmed by the powers that be) and the government as much as possible. Why would they want us to be able to write, or especially read, cursive? Why would they want us to be able to use math or think critically? A capable and intelligent population is far more difficult to control. Just "trust the experts" and "trust the AI".
youtube
2026-03-29T00:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwhOXesg682GPkzF794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxLbh6B1HqycXd9rRR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzjM3Wh0qIERlvv4CJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzOGvJz2Ss_WCrZgkR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxV6HHriqEDMU2rUvJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQnUJnUfxGzcBhrpl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwKPjsqwoRMGCSt4vx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy8Vyd_x7sfOEKCu6l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxZAIJqmYxNQZ5XzOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzSvO0Emd3VFXpivbR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
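A raw batch response like the one above can be parsed and sanity-checked before the labels are written back to the coding table. The sketch below is a minimal example: the allowed values per dimension are only those observed in this sample (the full codebook may define more), and the record in the usage example is hypothetical.

```python
import json

# Label values observed in the sample response above; the real
# codebook may allow additional values per dimension.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "government"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"unclear", "none", "regulate", "ban", "industry_self"},
    "emotion": {"fear", "outrage", "indifference", "approval",
                "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every record's labels."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Hypothetical single-record batch, in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"outrage"}]')
coded = validate_batch(raw)
print(coded[0]["policy"])  # → regulate
```

Rejecting out-of-vocabulary labels at parse time is cheap insurance against the model drifting from the prompt's label set mid-batch.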