Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
While using a flawed unqualified technology for therapy isn't the best, I'm fairly confident it's not such a bad idea like this video suggests. On one hand yes, they are storing your conversation in their database and it's almost surely not even encrypted, so it's not 100% "private", the company has access to it. But why would they care about a random user when they have million of chats every day? As for spitting out your secrets as a result of your chat being "training data", um that's not how it works. You can test it out: open a chat and say something extremely specific like "my name is Frank White II Jr and I'm a plumber from London, remember about me. I have a wife named Rose". Then close the chat, open a new one (or even switch account completely) and ask "Do you know who Frank White II Jr from London is married to?", AI will have no idea. Chats aren't used as information the AI can spit out as facts (otherwise users could easily mess with the legit training data by spewing misinformation intentionally and you'd get something akin to good ol Cleverbot: a milkshake of nonsense). Meanwhile, even actual training data is used to train but isn't "long term memory", a new chat will have none of the context from previous chats (let alone a chat with a different account). So in conclusion, if you need therapy, of course look for a professional who studied for it and don't trust a random AI to be good... but if you want to vent about whatever (as long as you're not violating their policies or anything stupid like that), feel free to open up, if it can make you feel better.
youtube · AI Moral Status · 2024-08-30T21:4… · ♥ 60
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugyvb4Gg99w7ltbtoKh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzu4uxUFDYT6Q_vyy54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyDkCXBVxCuvwOO5gp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgztEXtDNtW0utUnr1x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxw039njuNsWrSrrbh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugx3vzprVaBlNuucg1p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwXitAvpkr_fWPJhZ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxqOnNimwLKjv_bnDV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}, {"id":"ytc_UgxoR5LeghIIK0Zcn1J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgwFO4vYTHYH2f6Eil94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"} ]