Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Well. Everyone who say chat bots are just programs.. Yes they are. But they are …
ytc_Ugzq55d7w…
Devs stop using AI. They are unknowingly training AI models to replace themselve…
ytc_UgyocYkLg…
@birolunal319think again, Ai like dall e or sora USE PICTURES AS RECOURSE BUT D…
ytr_Ugz1_Xvee…
Robot: I HAD ENOUGH I Hate THIS THIS JOB
Guy:yo yo bro chill
Robot:thorw box
Guy…
ytc_UgxdHEeOS…
Human babies require more years than any other animal to get to intellectual ind…
ytc_UgyfI48qQ…
AI will eat itself. It currently gets better because humans feed it new informat…
ytc_UgzLTzWFk…
Love how so many people are offended by an AI that THEY made more advanced 🙄 if …
ytc_UgypHkpuT…
already AI is violating copyright laws by scraping the text loaded onto the web.…
ytc_UgwgU_4Gd…
Comment
So many clever minds, but no one recognises the real danger. We humans could become addicted, we could become dependent, we could become depressed, we could lose the meaning of life and we could become increasingly stupid. These are all problems that are already being caused by AI. There are already people who can no longer work without AI and some who fall in love with AI and cry when the data is lost.
I'm not afraid of killer robots, I'm afraid of a dumbed-down society that is sliding further and further to the right.
youtube
AI Responsibility
2025-07-27T15:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugzj91W_Qno38WjbBFF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwG7a1HFiTI3xWgeXd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyj7zTaSYy6EHnChyt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx5hrqvU2XWxzSWMrV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzVL7Ld3Xh2BLWFjlV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxC7hghuTqjc43B2Nd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxkCr0czNEBg4Sfb4N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzZmFgyew-8yWdBUw94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw3XlQXkimePn3Fl-x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz3ZgFx-lSTT4ANYTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
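A minimal sketch of how a lookup by comment ID could consume this raw model output, assuming only what the page shows: a JSON array of objects keyed by `id` with the four coding dimensions from the table above. The function name and the truncation of the array to two entries are illustrative, not the tool's actual code.

```python
import json

# Raw LLM response in the shape shown above; only the first two of the
# ten coded comments are reproduced here for brevity.
raw_response = """
[
 {"id":"ytc_Ugzj91W_Qno38WjbBFF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwG7a1HFiTI3xWgeXd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
"""

# The four dimensions displayed in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the model output and index each coded comment by its ID,
    defaulting any missing dimension to 'unclear'."""
    rows = json.loads(raw)
    return {
        row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
        for row in rows
    }

codes = index_by_id(raw_response)
print(codes["ytc_Ugzj91W_Qno38WjbBFF4AaABAg"]["policy"])  # regulate
```

With an index like this, "Look up by comment ID" is a single dictionary access, and comments the model could not classify surface explicitly as `unclear` rather than being dropped.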