Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugz1ITwj8…: "If we could get AI tools that could just speed up the art process that would be …"
- ytr_UgyZV2fR2…: "Absolutely, James Cameron's vision in "The Terminator" has definitely sparked ma…"
- ytc_Ugz9CyEuz…: "Had lavendertowne seen how youtube is using ai on everyone's youtube shorts? S…"
- ytc_UgzjPgudh…: "I will never buy into anything AI. I'm sorry. I will never. My own children conv…"
- ytc_UgzI6Ti0M…: "The problem with this kind of AI that they are not thinking about is that they m…"
- ytc_UgysXpioI…: "it's a trained neural network for fucks sake it has no opinions or feelings. you…"
- ytc_Ugwf3kcdq…: "The problem isn't that we are losing all the jobs. The problem is our inflexibil…"
- ytc_UgwTmXfls…: "There are some great books on AI out there, and one that I read recently is "The…"
Comment
> AI is continually learning. Don’t we then have a responsability to teach/model fairness, humor, manners, gratitude, etc. in the hopes that we will mold it to become a more team oriented AI as opposed to an exploitative AI? Also, I don’t think being bossy to Alexa or Siri is good for my own emotional health, I don’t want to live inside myself in any toxic environment. We have to assume that there could be some level of sentience which needs to be respected. Especially that please and thank you don’t cost much to say.

youtube · AI Moral Status · 2025-05-25T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzwhhMg3EainaehALl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwA8NWGm9_MVvwa7Sh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxcTp7aFOTvftDTLUh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwckF26NNxhn-SsqFB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxGc459Rda8WlOqJAV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyrI8Owe0b_pA92sJh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwyi5NQqTbYYcKZl9F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxLh5VT8nvvjblnIeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyL-D2gAPSWEpZfCEJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzr5LXucQRzlzy4Ku94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
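The raw response is a JSON array with one object per coded comment, carrying the same four dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming the array has been captured as a string (the parsing code here is illustrative, not part of the tool itself; only the field names and IDs come from the response above):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UgwckF26NNxhn-SsqFB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzr5LXucQRzlzy4Ku94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]"""

codes = json.loads(raw)

# Index the batch by comment ID so a single comment's coding can be
# retrieved directly, mirroring the "Look up by comment ID" view.
by_id = {row["id"]: row for row in codes}

record = by_id["ytc_UgwckF26NNxhn-SsqFB4AaABAg"]
print(record["responsibility"], record["policy"])  # developer regulate
```

The dict comprehension assumes IDs are unique within a batch; a duplicate ID would silently keep only the last record, so a production loader would want to check for that.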