Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I see two problems with using AI for therapy.

1. While AI can give useful responses, it can also be stunningly wrong. There is no way to tell if its response to a therapy situation is helpful, irrelevant, or harmful unless that response is reviewed by a qualified therapist, in which case you would be better off talking directly to the therapist and not wasting time with the AI.

2. AI has 0 privacy controls. Anything you would not be willing to tell to random strangers or print out and leave in a Starbucks for anyone to find should not be put into AI. I work with PII, and my employer's official policy on AI is that any use of AI for work must be approved by a company executive prior to putting anything into AI, and a significant portion of that approval process is to guarantee that no PII is entered into AI.
Source: youtube — AI Moral Status (2025-06-28T16:2…)
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy_0Yiq6puBrmX4coB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxh-Bm_YymwWxTI3m54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwqh0rrHhR-qUOSKb14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwtuoLDp_XvJwS16Dx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwheWs0t7QAzBoyBGd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzSC9aKF72CP5bkUXJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugzu3Q6gf85weFslm7t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxXTXb7TiPflpK5yw14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy5l8GTGo10PHq-KS94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy5QNFi_Jqybf2iU5V4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
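The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions keyed by comment id. A minimal sketch of how such a response could be parsed and indexed by id (the function name `parse_codes` and the fallback value `"unclear"` are illustrative assumptions, not part of the original pipeline; two entries from the dump above are reproduced as sample input):

```python
import json

# Two entries reproduced from the raw LLM response above, for illustration.
raw_response = '''
[
  {"id": "ytc_Ugy5l8GTGo10PHq-KS94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy5QNFi_Jqybf2iU5V4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
'''

# The four coding dimensions used in the results table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Index the coded dimensions by comment id.

    Missing dimensions default to "unclear" (an assumed fallback).
    """
    return {
        item["id"]: {dim: item.get(dim, "unclear") for dim in DIMENSIONS}
        for item in json.loads(raw)
    }

codes = parse_codes(raw_response)
print(codes["ytc_Ugy5l8GTGo10PHq-KS94AaABAg"]["emotion"])  # fear
```

Indexing by id makes it straightforward to join each LLM code back to its source comment, which is how the single-comment view above (Responsibility: company, Emotion: fear) would be produced.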