Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I for one believe we should be developing AI for the future of humanity but this…
ytc_UgwM_btNc…
Tech bros have to be one of my least favourite personality types on the internet…
ytc_Ugz2CtEVV…
Once my dad came in while I was on an AI chat app and asked me what I was doing……
ytc_UgzGTPfJ2…
As a senior designer within an agency I’m noticing marketing managers, non-desig…
ytc_UgxIQPuUD…
99% jobless then there is no need for ai because there will be no one spending m…
ytc_UgymcYao2…
Ai watermarks generated content, and even if that wasn't the case, many humans c…
ytr_Ugw6IcHEb…
Am i the only character Ai user who doesnt do inappropriate things? All my chats…
ytc_UgxISS4WF…
"OH MY GOSH!! SULLY!!! - whaps sullys arm repeatedly and points at his green fi…
ytc_Ugz3cgs2E…
Comment
I think there needs to a law for what AI can and can't do. Because the speed with which technology is being developed right now I don't think there will be anything that AI won't be able to do. So if this happens then everyone will be jobless except these AI giant companies and their employees. We shall restrict the use of AI for the betterment of the people.
And this universal basic income is a shitty thing where the owners of these huge companies will be filthy rich and ordinary people are left with limited money to spend.
youtube
AI Moral Status
2025-07-29T08:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzBFAML17sNwEfS0nF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwfe6pFr9GKK8MhqNt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAY2QuOXIzzx7UGa14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxhfKWhNBW7fsQ11h94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxdWOvD-ui3TcXtrX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgytVDEE_H1L1AhGSaZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyjVGU980ivRPtWeMR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzd4lKOEUn0oOcvY_N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzc6qRDVoyWy9aNA-J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKLz-dJByUL_ZOYq94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
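The coding result shown above can be recovered from the raw LLM response by parsing the JSON array and indexing it by comment ID. A minimal sketch in Python, assuming the raw response is valid JSON in the shape shown (the snippet below inlines only the `government`/`regulate` row from the response above):

```python
import json

# One row copied from the raw LLM response above; a real raw
# response would contain the full array of coded comments.
raw_response = '''[
  {"id": "ytc_Ugzd4lKOEUn0oOcvY_N4AaABAg",
   "responsibility": "government",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# Build a dict keyed by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugzd4lKOEUn0oOcvY_N4AaABAg"]
print(coding["policy"])   # → regulate
print(coding["emotion"])  # → fear
```

This mirrors the "Look up by comment ID" behaviour: each object's `id` matches the truncated `ytc_…`/`ytr_…` identifiers shown in the sample list, and the remaining keys map one-to-one onto the rows of the Coding Result table.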