Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
***“What does this neural howlround look like exactly? [My friends and I engaged with it in a non-serious way, and after two prompts it was already encouraging us to write a manifesto or create a philosophy.]”*** I just read the conversation. You literally led the AI where you wanted it to go. You told it to create you a philosophy, and the AI based it on the previous context, which is normal. The AI met you at the level you were communicating with. I’m not saying this kind of behavior in humans with AI isn’t happening, because it clearly is. But don’t make out that the LLM is pushing it randomly onto people who aren’t discussing it or essentially asking for it.
reddit AI Moral Status 1748376274.0 ♥ 31
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mukat9v","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_mukbjm4","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"rdc_muoahcz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_mukqqng","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"rdc_muktlf9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]