Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I'm raising two actual intelligent beings. AI isn't intelligent for it cannot re…" (ytr_UgxsF5Qd5…)
- "If you want to put all our apples in a algorithm then you will have no control i…" (ytr_Ugym8pTee…)
- "Bad news: We still have yet to figure out a way to stop AI from killing us. Mea…" (ytc_Ugxpf7f7E…)
- "14:28 I don’t know if this is helping or not but almost all of the art I make an…" (ytc_UgySLPYjQ…)
- "@c.eb.1216 We did not have capitalism , we live in an oligarchy corporate coun…" (ytr_UgwXXH6AP…)
- "I heard you say there’s a good reason to say thank you and please to chatbot but…" (ytc_UgyAlhfiU…)
- "Im sorry but most ai art genuinely looks uncanny,like if i looked at it for a ha…" (ytc_UgwuQ-zMf…)
- "@disorderandregression9278 he's so mad about AI that I can't take anything from…" (ytr_UgzVOTiRO…)
Comment
> I see and treat LLM's as a "thought mirror". As in, I use it to game ideas and concepts and do speculative worldbuilding based on those ideas. The results have been quite fun and intriguing, but it is clear that it makes basic mistakes and glitches occasionally. No, it is not sentient.

Source: youtube · AI Moral Status · 2025-07-10T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgyVshU967lWNT1W5vp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxX5AuxhGTXFds5c1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyQSS19T4b8k9nVlOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzb452AO7ltEr6mjj94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxhQN9DeZt5ozPL7rJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzkys4fiGGHHPrTLdt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOqiQeELXmTEyLOGF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8SsvWAGagM2d4Ee14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzMf_8U9xkDX0tmBAF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJ5TOggBMe_m17RHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
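A minimal sketch of how a raw batch response like the one above can be parsed into per-comment codings, assuming the model returns a JSON array of records with an `id` field plus the four dimensions shown (`responsibility`, `reasoning`, `policy`, `emotion`). The helper name `parse_batch` and the fallback to `"unclear"` for missing fields are assumptions for illustration, not part of the tool itself:

```python
import json

# Dimension names taken from the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict:
    """Map comment ID -> coded dimensions, defaulting missing fields to 'unclear'."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

# Two records abbreviated from the raw response above.
raw = '''[
 {"id": "ytc_UgyVshU967lWNT1W5vp4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
 {"id": "ytc_UgxhQN9DeZt5ozPL7rJ4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

coded = parse_batch(raw)
print(coded["ytc_UgxhQN9DeZt5ozPL7rJ4AaABAg"]["emotion"])  # fear
```

Keying the result by comment ID mirrors the lookup-by-ID workflow of the page: once parsed, any coded comment can be inspected directly from its `ytc_`/`ytr_` identifier.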