# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples
- "As an engineer, For some reason I just assumed that you were a tech channel and…" (`ytc_UgyiLz9Vp…`)
- "Sophie is narcissistic and thinks she is the smartest robot in existence and she…" (`ytc_UgwVvMKXw…`)
- "This is gonna force AI art-thieves to trace, and then they'll be drawing on thei…" (`ytc_UgztK3Gfo…`)
- "Every job? I don't think so.. Nobody's ever going to call an AI plumber or an …" (`ytc_UgyKUNBZW…`)
- "@Fern_leaf255 Did you read my entire comment? AI art isn’t a copy-and-paste job.…" (`ytr_UgycDfkEi…`)
- "Screw off, honestly. You dont even realize what kinda damage this ai TRASH cause…" (`ytr_Ugxu7v-x3…`)
- "So... what I'm hearing is that the AI's are functionally people and should be tr…" (`ytc_UgyN6jKpK…`)
- "I already insist that AI not referee to "itself" in subjective terms before answ…" (`ytc_UgwcvFBIu…`)
## Comment

> Nope. Because robots will never be conscious. Emergentism is logically incoherent and has no real-world examples that aren't just faith-based assumptions grounded in a materialist ideology.
>
> If robots appear conscious, then you'd be giving away your own rights to a set of algorithms which is the stupidest thing you could ever do. You might as well be giving rights to a set of dominoes.

Platform: youtube | Video: AI Moral Status | Posted: 2019-08-25T00:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
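A coding result like the one above pairs four categorical dimensions with a timestamp. A minimal sketch of such a record in Python (the field names mirror the table; the dataclass itself is an illustrative assumption, not a documented schema):

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment, mirroring the four dimensions in the table above."""
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "deontological"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "outrage"
    coded_at: str        # ISO-8601 timestamp string


# The record shown in the table above
result = CodingResult(
    responsibility="ai_itself",
    reasoning="deontological",
    policy="none",
    emotion="outrage",
    coded_at="2026-04-27T06:24:59.937377",
)
print(result.emotion)  # outrage
```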
## Raw LLM Response
```json
[
{"id":"ytc_UgxIRHSQEa0S_LSgwL94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgxI2qf9PuXQg05cX8p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzPlBfYWE2UWtHwqNl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy4JPgBw-FkT83RQRp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBieFYpULLhEq7yLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCZM29Y3b2PJU7cXF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugws5lAVOCK_25o-YwF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZTcHcX_9EVDIVBTF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzgD1J1XJI8RsPRgst4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy6t5ffvnuaFM-WtJN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
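Because the raw response is a JSON array of per-comment records, it can be parsed and sanity-checked directly before the codes are accepted. A sketch, using two of the records shown above (the required-key set is inferred from those records, not from a published codebook):

```python
import json

# Two records copied verbatim from the raw response above
RAW = """[
{"id":"ytc_UgzPlBfYWE2UWtHwqNl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6t5ffvnuaFM-WtJN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]"""

# Keys every coded record is expected to carry
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_batch(raw: str) -> list[dict]:
    """Parse a raw batch response, keeping only well-formed records."""
    records = json.loads(raw)
    return [r for r in records if REQUIRED_KEYS <= r.keys()]


coded = parse_batch(RAW)
by_id = {r["id"]: r for r in coded}
print(by_id["ytc_UgzPlBfYWE2UWtHwqNl4AaABAg"]["emotion"])  # outrage
```

Indexing by `id` is what makes the "look up by comment ID" view possible: each coded record can be joined back to its source comment.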