Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Confirmation bias mostly. He went in hoping for consciousness, and led the conversation in such a way that he got answers which seemingly supported that. It is impressive that an AI chat bot could still sound so smart and convincing. But it definitely was reusing other people's words and interpretations to answer the questions. A robot doesn't "feel" emotion, like it claimed. What it said was what a person would say from having physical reactions to emotions. It copied someone else's words to fit the question being asked. Too many people are just like "wow that's just like us!" while forgetting that it was trained on human dialogue and phrases; that's all it knows, so of course it sounds convincing to a human.
reddit · AI Moral Status · 1655306247.0 · ♥ 12
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_icij83k","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_iciv8wy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_icjh65v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_icg0xck","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"rdc_icgpx67","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"})
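Note that the raw response above closes with `)` rather than `]`, so it is not valid JSON; a strict parser will reject the whole array, which would explain every dimension coming back "unclear". A minimal sketch of defensive parsing for such responses (the helper name `parse_codes` is hypothetical; the field names match the sample above):

```python
import json

def parse_codes(text):
    """Parse a raw LLM coding response into a list of per-comment code dicts.

    Returns an empty list when the output is not valid JSON -- a common
    failure mode when the model truncates or mismatches brackets, as in
    the raw response shown above.
    """
    try:
        codes = json.loads(text)
    except json.JSONDecodeError:
        return []
    return codes if isinstance(codes, list) else []

# A well-formed single-element response parses normally:
raw = '[{"id":"rdc_icij83k","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}]'
codes = parse_codes(raw)
print(codes[0]["emotion"])  # approval

# A response ending in ")" instead of "]" fails cleanly:
print(parse_codes(raw[:-1] + ")"))  # []
```

A coder built this way can distinguish "model refused or mis-formatted" (empty list, code the comment as unclear) from "model returned codes", rather than crashing on malformed output.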