Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "Apparently a lot of AI tools are now "consuming" other AI-created art, making th…" (`ytc_Ugz16EzQs…`)
- "I am very proud of the fact i have likely wasted thousand of dollars generating …" (`ytc_UgxtM_pNg…`)
- "No matter what you do, Hungry people are going to get the pizza from a robot wit…" (`ytr_Ught0WN5r…`)
- "It's the human's choice on how much AI we want. Laws need to get passed on bound…" (`ytc_UgyfoIk-_…`)
- "I'm pretty sure secret lizardmen couldn't even dream of being as awful as AI-bro…" (`ytr_Ugxb2U3B4…`)
- "Artificial "intelligence" is absolutely NOT "the same thing" as humans. The A.I.…" (`ytc_UgyzopxWO…`)
- "Honestly, Clever AI Humanizer is a lifesaver 😍 It makes AI text sound 100% natur…" (`ytc_UgybnNGsX…`)
- "IA can put together things that already exist, it can no reason, it can no solv…" (`ytc_UgxB1kJcP…`)
Comment

> "I think we should not mistreat AIs"

I agree. Additionally, I think yelling at AI or insulting it just doesn't help at all. When it makes mistakes or hallucinates, you can just tell it what's wrong or refine your prompt. 🤷♂

youtube · AI Moral Status · 2025-11-06T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyvDAGJQSl7Dwm_OZl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzuGab3djj5Ep3HyFR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzV7T0FvIZ_FjZGhAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwreNLdziQhfKXOAcx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwrQA4zNwxPnt1OAOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwHP0PslwT_9ZgL0hx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzLsHAsLruT-ss4OKp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugwp--sSbFnKgOgAwq14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzdyGtMRuKKreWrbjV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxOGwZgRlu0B6jllyt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
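A response like the one above can be turned back into per-comment codings with a few lines of parsing and validation. The sketch below is a minimal illustration, not the tool's actual pipeline: the field names come from the JSON shown here, but the allowed label sets are only inferred from the values visible above (the full codebook may define more categories), and `parse_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the responses shown above;
# the real codebook may include additional labels.
DIMENSIONS = {
    "responsibility": {"none", "user", "company", "developer"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping malformed rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        coding = {dim: row.get(dim) for dim in DIMENSIONS}
        # Keep only rows whose values all fall inside the known label sets.
        if all(coding[dim] in DIMENSIONS[dim] for dim in DIMENSIONS):
            coded[cid] = coding
    return coded
```

Looking up a single comment is then a dictionary access, e.g. `parse_response(raw)["ytc_UgwHP0PslwT_9ZgL0hx4AaABAg"]` recovers the user/virtue/none/approval row rendered in the Coding Result table.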