Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- `ytr_UgxYEJWjV…`: "@Kingdeathtrooper Are you trolling? They train ai models on people's work witho…"
- `ytc_Ugx-dbEGi…`: "Are legacy automakers using AI to design and produce their cars recently? It doe…"
- `ytc_UgzioVi7h…`: "I hate ai art its not art also being born with a gift for art, still, pure tale…"
- `ytc_UghQJ5sCd…`: "I think it's kind of interesting actually. We are just as machine as them (theo…"
- `ytc_Ugyvb3NQY…`: "I liked Star Trek TNG way of dealing with it. They banned the production of inte…"
- `ytc_Ugy7KqueV…`: "I didn't see artists taking care of factory workers, miners, or else. Why should…"
- `ytr_UgwogC3C7…`: "This is why you should use the speed limiter all the time. On some cars it refre…"
- `ytc_UgyAaaoV1…`: "It's so interesting that the thing we are most scared of is anything that is far…"
Comment
> They're not aware of when they're being tested, it's a different tip off, that they're trained in a certain way for tested tasks. You'll never get caring with them, they don't have that in the model. What is being hand-waved as philosophy here is really important. It's a substantial distinction whether or not something is thinking or not. If something "thinks", then it's responsible as it's own entity in the world, cognito, ergo sum. These models don't have agency, they don't have thought. Everything they say or do is as a software with fault fully attributable to their authors and users. A latent space isn't a memory space, it's a lossy compression. Yes, you can sort of take out hallucinations, then you have to do a lossless compression instead (much more expensive) and then you're not going any generation, just look up. You're doing search rather than "AI". So you know... not exactly something that's going to attract investor hype these days.
Source: youtube | AI Moral Status | 2025-11-02T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwdAOIw0vC2w_SXVel4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyu7U3JsjE2Z72cTRh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4AGe2FeVh54njl494AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz10iK1QouyETqmQR14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwD-S2aY4BOf3U2Exh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-4HfaBMiOAiRX6yx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxbns0VwxHsfe7e4fJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzmsr9lspFLoWJm5gJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxnO6auS0yaYgzQgPB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgztsmRIcAbBSheo4-l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
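A raw batch response like the one above can be turned into per-comment codings with a short validation pass. This is a minimal sketch, not the pipeline's actual code; the allowed values per dimension are inferred from the examples on this page and the real coding scheme may include more categories.

```python
import json

# Allowed values per dimension, inferred from the coding table and raw
# responses shown above (assumption: the full scheme may be larger).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"ban", "unclear"},
    "emotion": {"mixed", "fear", "outrage", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}, rejecting
    any value outside the known coding scheme."""
    rows = json.loads(raw)
    codings = {}
    for row in rows:
        cid = row["id"]
        coding = {dim: row[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        codings[cid] = coding
    return codings

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"ban","emotion":"fear"}]')
print(parse_codings(raw)["ytc_example"]["policy"])  # prints "ban"
```

Validating against a closed vocabulary is what makes responses like this machine-usable: a model that drifts to an unlisted label (say, `"anger"`) fails loudly instead of silently polluting the coded dataset.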