Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree. The way they describe howlround in the paper is instead similar to when an AI model devolves into repeating previous phrases, input and output, to the point of failure. I experienced it more frequently in earlier models with smaller memory sets (hundreds of lines instead of thousands), but I believe the specific worry the paper outlines is that a conversation can go on long enough for the guardrail instructions to begin corralling the model into a state of repetition. It also offers suggestions for how to improve models so their output is more varied and less prone to a repetitive fail state. None of that has anything to do with people developing psychosis from interacting with overly supportive AI models.
reddit AI Moral Status 1748379524.0 ♥ 8
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mumlx9t","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"rdc_n8yyjq9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_mukf9cp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"rdc_mul25xl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_mun5iqh","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]
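The raw response is a JSON array of per-comment codes, one object per comment `id` with the four coding dimensions. A minimal sketch of parsing such a response and looking up a comment's codes, assuming the array form shown above (the two-element `raw` string here is an abbreviated stand-in for the full output):

```python
import json

# Abbreviated raw LLM response: a JSON array of coded comments.
raw = ('[{"id":"rdc_mumlx9t","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"fear"},'
       '{"id":"rdc_n8yyjq9","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')

# Parse the array and index the coded items by comment id.
codes = json.loads(raw)
by_id = {item["id"]: item for item in codes}

# Each entry carries the four coding dimensions.
print(by_id["rdc_mumlx9t"]["emotion"])         # fear
print(by_id["rdc_n8yyjq9"]["responsibility"])  # none
```

If the model's output is truncated or mis-terminated (e.g. a stray `)` instead of `]`), `json.loads` raises `json.JSONDecodeError`, which is a useful guard before coding results are stored.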