Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Chatbots have failed in their purpose methinks. I used to use perchance's AI chatbot to develop an early pass of specific dialogue scenes in the stories I write. Because I struggle with in character dialogue, I'd write out the scene and then play it out with AI to see it's objective interpretation of a certain character. It's a different perspective that I could easily get without any differing life experience or bias. I simply cannot do this anymore. Chatbots are designed to be addictive machines and not tools. Even despite Perchance's extremely high customizability and aspect of human control, the responses generated feel flat and predictable, to the point where I find myself rewriting the message constantly. And at that point, why even use the thing if it won't even give me the in character responses I'm lacking. It says what it thinks I want it to say, and not what the character would say. It hardly ever disagrees or has different "opinions" anymore. Talking to AI is a drag, I get actively bored trying to use it in a way that should in theory be beneficial. It's main purpose has become to keep people as engaged as possible and it fails in that. It fails because it could never truly replicate human conversation and connection. It is a tragic thing that can only cause harm. And that's why I've stopped using it alltogether
youtube AI Harm Incident 2025-07-21T02:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyyYefxFTL_y1pLbFV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxkKq6Dsu7bHbsh7y54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw5S4-bLBxZQ_qCBRp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzeLC6-dGDK3RJ6Ztp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwO9COuCYPr983dqYZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzNEOT6euwEUA_17-d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzQQOVxNKylNnVu_Ep4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzswOkP7bLnfu438ZF4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw-uzKKNgsodIWF9lN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzKBOL5h9q0-9VrBo14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
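The raw response is a JSON array of per-comment codes, one object per comment id, across the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and looked up by comment id (Python is an assumption; the field names and the first entry's values are taken from the response above):

```python
import json

# Raw LLM batch response: a JSON array of per-comment codes.
# Only the first entry is reproduced here for brevity.
raw = '''[
  {"id": "ytc_UgyyYefxFTL_y1pLbFV4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "resignation"}
]'''

# Index the codes by comment id for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coded dimensions for one comment.
row = codes["ytc_UgyyYefxFTL_y1pLbFV4AaABAg"]
print(row["responsibility"], row["emotion"])  # none resignation
```

In practice a parser like this would also validate that each dimension's value falls in the codebook's allowed set (e.g. responsibility in {none, user, company, ai_itself, distributed}) before accepting the batch.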