Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm in my last year of studying to become a social worker and I recently used AI to give me some ideas to write about a method I used in my internship and how I applied that to the field I worked in. On first glance it looked pretty smart, but when I really read it, with the intent of understanding what it had actually written, it was utter garbage. AI used key words associated with the method and the field I was working in, but it made connections that made absolutely no sense. I've also heard of people using AI as a friend/therapist which worries me a lot, especially since I saw a report of a teenage boy who fell in love with one of those character AI's and he basically told the AI that he wanted to kill himself, but because he didn't use words that literally say "I want to kill myself", the AI encouraged him and he did end up taking his life. There is a deeper meaning and alternate meaning to words and a complexity to emotions and environmental factors, that I don't think AI can ever learn. AI gives very generalised solutions. I don't think AI will ever take over my job or similar jobs like mine, because AI can't replace genuine human connection. Imagine a children's home run entirely by AI, or a psychiatric institution run by AI, no human personnel. Just robots. Do you really think that would work? Well it could work but it wouldn't help people. They'd probably just be sedated the whole time. There is also trust, scepticism and faith that is needed when working with people, that can't just be replicated with algorithms.
youtube AI Governance 2025-06-25T08:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyXP-0lXv4Q79kGlut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZU_X54wEkVbSycVN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxbmRGBNntkibQKokV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwFCCpZdarjZ6-RpIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyo-K2iTKFIkJYFd1J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzvpvbr8Ng-HIoFBw54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw1yOheP4_jzMeiul54AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz1T3ROLjiq7Y3Ym6B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzm-FlhzIdsLwq6X_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzbpRT3ePZiTCQ6xOt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
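The raw response above is a JSON array of per-comment codes over four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and tallied, assuming the model returns valid JSON in this shape (the two IDs below are shortened placeholders, not real comment IDs):

```python
import json
from collections import Counter

# Placeholder batch in the same shape as the raw LLM response above.
raw = """[
  {"id": "ytc_example1", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Tally each coding dimension across the batch.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
counts = {dim: Counter(rec[dim] for rec in records) for dim in DIMENSIONS}

for dim in DIMENSIONS:
    print(dim, dict(counts[dim]))
```

In practice the parse step would sit behind a try/except, since LLMs occasionally return malformed JSON; a failed `json.loads` is the usual signal to retry or flag the batch for manual coding.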