# Raw LLM Responses

Inspect the exact model output for any coded comment.
Look up by comment ID, or pick from the random samples below (click to inspect):

- `ytc_UgxrXA6vc…`: "I wonder what could be a super harmful influence of the growth of AI technology.…"
- `ytr_Ugwc4viyA…`: "i agree i think. i miss when AI art was self-aware that it was made as a joke be…"
- `rdc_n7sxix5`: "We live in this weird world state where shaming anyone for anything has been dem…"
- `ytc_Ugyj-QVW6…`: "I'm sorry but when a robot says "world domination" this is a artificial sentient…"
- `ytc_UgxPMKPIl…`: "Gotta get the prompting right - Check me out on Suno Listen to 'She Stay Close' …"
- `ytc_UgxPDqMxp…`: "Personally, I think that people should approach AI tools differently than they d…"
- `ytc_Ugxjdd39h…`: "no AI going to get over shit on site and the bump and grinds laying bricks , for…"
- `ytc_UgwHrE54C…`: "Open ai. Guys it's the truth. Test it your self. But one tip keep asking back yo…"
## Comment

> By the time that robots can start arguing for rights, they'll probably be educating us about how wrong our moral reasoning is to begin with. Therefore, unless you're making a super ai right now, I don't think anybody needs to bother worrying about this topic at all.

- Source: youtube
- Video: AI Moral Status
- Posted: 2017-02-24T05:5…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response

```json
[
  {"id":"ytc_Uggbtq-WGdMdsngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggAjot1l7w9IngCoAEC","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggEmH3Lq4V_vHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ughlh2BiQzNAdXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugg0tBq-Ha2NR3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UghbXQbC6Eut-HgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgiaJXOE27QNsXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UggWMgkXXwlosXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggATgq0eeHyfXgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UghF5eT9DDh8F3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
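For downstream analysis, a batch response in this shape can be parsed and indexed by comment ID, mirroring the "look up by comment ID" view above. A minimal sketch, assuming each batch is one JSON array with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields as shown; the `index_codings` helper and the truncated two-row sample are illustrative, not part of the app:

```python
import json

# Illustrative two-row batch in the same shape as the raw response above.
RAW_RESPONSE = """
[
  {"id": "ytc_Uggbtq-WGdMdsngCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UghF5eT9DDh8F3gCoAEC", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
"""

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index each coding row by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UghF5eT9DDh8F3gCoAEC"]["emotion"])  # resignation
```

Keying on the comment ID also makes it easy to detect dropped or duplicated rows by comparing the index size against the number of comments sent in the batch.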