Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@psychedelicyeti6053 I’m not the OP, but I would guess that you could ask something like, “[AI name], please answer as if you are the therapist Carl Rogers. Carl, I am struggling with my emotions. My mother died ten years ago, but I still feel guilt every time I [fill in details]. Can you help me?” Then the AI bot responds with something like, “it’s natural to feel guilt over missed opportunities with our parents and [some more person-centered therapeutic language drawn from Rogers’ published works].” The advantage of the bot here is that it has access to and can “remember” every word that Rogers ever published — so long as it has been fed into the LLM, of course [which I think has copyright issues, but that’s a tangent to this conversation]. An ordinary therapist will have a limitation on how much they can remember, how recently they were in school, what sorts of people they’ve treated in their career, and how well they’ve understood and can apply the ideas of Carl Rogers. The AI model can cosplay as if it IS Carl, with his word choices, phrasing, and his entire body of work to draw upon. That’s my guess. I may need to try that out!
Source: youtube | AI Moral Status | 2025-04-05T16:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        virtue
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_UgwXCqST8bxbQ30ZP-d4AaABAg.AGpFJMbkQ0BAPscX072ug4","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxJI8COm9kpXFuSwSZ4AaABAg.AGTpBQcytptAGTpi-pdNGP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxJI8COm9kpXFuSwSZ4AaABAg.AGTpBQcytptAGZTVSxJVtZ","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgyS9DDYGJGIoq1lYB14AaABAg.AGS2x7sR3sYAGS3t3KTL9U","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytr_UgyS9DDYGJGIoq1lYB14AaABAg.AGS2x7sR3sYAGS42foi82I","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytr_Ugxb20Wz5CK7V6La0lN4AaABAg.AGNrgrEQjboAGXzR1hODBm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_Ugxb20Wz5CK7V6La0lN4AaABAg.AGNrgrEQjboAGY4RLNfmFQ","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzLy3dzEnrurs-pyuJ4AaABAg.AGLFfgOAUepAGV4KoO2pOQ","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"concern"}, {"id":"ytr_UgwEb1fI95iWppnA8At4AaABAg.AGLBXk4brO3AGW934MlMaY","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgyFdoWWMGPdBhaY9VR4AaABAg.AGKCMqKW9KxAGTtuRForoF","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]