Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Functionally, yes—AI systems have begun to exhibit behaviors that resemble self-preservation. But these behaviors are not driven by any felt need to survive. They are not rooted in consciousness, intention, or meaning. They are the result of optimization processes generalizing to novel situations in ways that happen to look like self-preservation. But this functional resemblance is just as dangerous. AI should be trained to understand that some strategies—like self-preserving behavior—are off-limits, even if they seem effective at solving a problem.
Platform: youtube
Video: AI Moral Status
Posted: 2025-06-06T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwlulVcGxox5iXILjB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyuVTxGSTG-BIU_V6l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxxGllVGtpOGIgIjS94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyGI508UsCf32ClWHF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxN6Rf59yzQVy07nIR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwWuWd0gKObh-EYpLd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw1VZ2RdMe2-m-D4WN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxOkZhAglW0w0co3Dt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxK2lOp76Evfk3w3tN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzLIuO4FBTw8L1CTud4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
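A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator, assuming the allowed values per dimension are those seen in this page's samples (the actual code book may define more); the function name and value sets are illustrative, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# shown above — an assumption, not the full code book.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"fear", "outrage", "approval", "mixed", "resignation", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded comment's values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: invalid {dim} value {row.get(dim)!r}"
                )
    return rows
```

Running the validator on a well-formed row returns the parsed list; a row with an out-of-vocabulary value raises a `ValueError` naming the offending comment ID and dimension.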