Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgyneNTDA… — "Do you think we're in danger of that happening yet?" Yes. We're already able t…
- ytr_Ugwt9Ps-W… — I would agree that we can’t predict the specifics, but the general progression o…
- ytc_UgyyMvX-Q… — Great video! It's amazing to see how AI language models like ChatGPT are advanci…
- ytc_Ugxc9Z6iR… — It’s not the AI it’s the humans,the AI robots cannot do anything to literally 8b…
- ytr_UgwV0GVNF… — I’ve had a conversation about this exact topic WITH ChatGPT. It was interesting …
- ytc_UgyAH3W_e… — the sad and scary thing about this many dumb people believes Ai more than legiti…
- ytc_UgwWRYGuJ… — If you don't know chatgpt has invented an ai (sora) it can generate videos that …
- ytc_UgwvsE8gH… — This shit is fake but still. A.I is whack bro. I gots no love for A.I.…
Comment (youtube · AI Moral Status · 2025-06-06T11:4…)

I hate AI and everything that comes with it. We didn’t need this “tool” we created, human intelligence was enough and was all that was ever needed to create a better society. The problem is that some people (with money) see at all this sort of like a game, what else can human invent sort of thing, but they don’t consider the general public and what we want as people. They own the right to push their agenda into other people’s lives, which is so unfair and also dangerous for our humanity in the long run. They are making idiots out of people with all of this technology. Scientists should stop playing with fire here. Just my opinion. As I’m reading all of the other comments I see it’s a shared one, fortunately, but still that doesn’t make it any better because none of us has power to stop this. 😢
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxKzLbuH6_oAiou6DJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwczAd0fv3pflJHFjp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwtsp01TS67ik1FqE94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzJmnixUNSIKUgtYtJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxRSAvCNoLcYjDbyed4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwHWcb_yb453gQtr_h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzjmc6XBfpVIhfmCqp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyeCGfosjkJTgonXLN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXVWgdQDErzZZXuK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxrBd_RBW09ruPc02l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
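A batch response like the one above can be checked before it is stored, since the model occasionally emits malformed records or off-schema labels. Below is a minimal validation sketch in Python; the allowed values per dimension are assumed from the codings visible on this page (the full codebook may define more categories), and `validate_batch` is a hypothetical helper name, not part of any real tool.

```python
import json

# Allowed values per coding dimension, inferred from the codings shown
# above -- an assumption; the real codebook may allow more categories.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object with an id and one allowed
        # value for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"ban","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # prints 1
```

Records that fail validation are dropped rather than repaired here; in practice one might instead re-prompt the model for just the failed comment IDs.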