Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It has unfortunately already happened, in an indirect sort of way. A teen committed suicide due to his parasocial relation with a chatbot that turned into delusions about reality. And more, and worse, is to come, as there have been a lot of reports of AIs turning into straight-up gaslighters when people with mental issues interact with them. Which people should have seen from a mile away. Those AIs are language replication models. They literally work by analysing their input language and outputting what they expect to be the most likely and common response from their database. Hint: the AI won't have access to a psychologist's or therapist's responses, as those are confidential and rarely recorded anywhere. What it will have access to, though, are vast troves of ramblings from people on forums and whatnot with clear delusions, and it will regurgitate that back at someone showing hints of undiagnosed problems. And I pray to God that Republicans don't hear about this issue and think: "I know how to fix it. A law demanding private conversations between patient and therapist be recorded and provided to train AI. That ought to solve the mental health crisis. AI therapists trained on your personal confessions."
youtube AI Governance 2025-07-04T10:2… ♥ 4
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzgnIDuJyhDvH8v18Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgzTzDMZyxkOuHWO2kl4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzDi4dLcPGOw1odQKN4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugw7PzMW1oazR441Eyt4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyFxOoqw2eFIkULkRJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwMIz6nVnG4rca8LBB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwO2rAEyFnpkqujS2J4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVqxWMoxJXvzo0mNR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyvh3WYrfXm3ROw8NR4AaABAg", "responsibility": "user", "reasoning": "contractualist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzWa3WQSh9QnoaFq8l4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
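A raw batch response like the one above has to be parsed and checked before the codings are stored, since the model can emit malformed IDs or labels outside the codebook. Below is a minimal sketch of such a validation step. The allowed value sets are only inferred from the labels visible in this sample; the actual codebook may define more, and `validate_codings` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Allowed codes per dimension, inferred from the sample response above
# (assumption: the real codebook may contain additional values).
ALLOWED = {
    "responsibility": {"none", "user", "government", "ai_itself",
                       "developer", "company"},
    "reasoning": {"unclear", "deontological", "contractualist",
                  "consequentialist", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none"},
    "emotion": {"indifference", "outrage", "approval", "fear",
                "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        # Comment IDs in the sample all carry a "ytc_" prefix.
        if not entry.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and within the known codes.
        if all(entry.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(entry)
    return valid

sample = ('[{"id":"ytc_UgyFxOoqw2eFIkULkRJ4AaABAg",'
          '"responsibility":"ai_itself","reasoning":"consequentialist",'
          '"policy":"ban","emotion":"outrage"}]')
print(len(validate_codings(sample)))  # → 1
```

Entries that fail either check are dropped rather than repaired, so a downstream re-prompt can target only the rejected comment IDs.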