Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples:
- "but the thing with those thinking processes is that they are still just repeatin…" (ytc_UgwnjB-9G…)
- "Bottom line - we need to work out who's in power. Who's controlling what and to …" (ytc_Ugy-th-Yn…)
- "Yet half the internet decided AI bad and anything related to AI gets tons of hat…" (ytr_UgwFLog5h…)
- "One time my aunt was driving a self driving car while she was in control it comp…" (ytc_UgybBUCLC…)
- "You just proved that you can not determine motivation or thought by facial expre…" (ytc_UgzwuFeUA…)
- "We need laws that force self driving companies to forfeit their business to acci…" (ytc_UgxJIkyjI…)
- "AI sucks, sure, but it's getting to a point where it sucks a lot less than the "…" (ytr_UgytUzom9…)
- "don't forget that we know much less than him about the newest AI technology that…" (ytr_UgwX1qHK0…)
Comment
I just think we should be extremely careful not to anthropomorphise LLMs. Remember you are not talking to Bing or Sydney, that’s the role it’s been instructed to play. The actual intelligence behind the persona it plays do not think in the way we do. Defer to the experts on this.
reddit
AI Moral Status
1676618462.0 (Unix time, 2023-02-17 UTC)
♥ 132
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_j8w52lh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_j8vnn6l","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"fear"},
{"id":"rdc_j8vti2w","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_j8w7ryk","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"rdc_j8w8p0g","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
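The raw response is a plain JSON array, one object per coded comment, so the "look up by comment ID" step above amounts to parsing the array and indexing it by `id`. A minimal sketch, using the exact response shown above (variable and function names here are illustrative, not from the tool's actual codebase):

```python
import json

# The raw LLM response shown above: a JSON array of per-comment codings.
raw_response = """
[
{"id":"rdc_j8w52lh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_j8vnn6l","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"fear"},
{"id":"rdc_j8vti2w","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_j8w7ryk","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"rdc_j8w8p0g","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
"""

# Index the codings by comment ID so a single comment's coding can be retrieved.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Looking up the reddit comment coded above reproduces the "Coding Result" table.
coding = codings["rdc_j8vnn6l"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# user deontological industry_self fear
```

Note that only one entry in the batch (`rdc_j8vnn6l`) corresponds to the comment displayed above; the other four are codings for other comments returned in the same batched response.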