Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Also i don't think one person/organization gets to decide what is moral/ethical. IDK how to solve that problem... because you can't. You should be transparent on the bias each one has, and make different iterations for different bias, and let be. Also, it's not alive, it's a magic box that tricks you into feeling because human brains have yet to evolve to understand this. The brain is still firing emotional signals even though it's just a.... laymen terms, 10000 computers imputing billions of things in a second or two, to figure out a realistic response, based on past responses....of an enormous amount of data. Though, for an AI companion app, for that purpose, I would always as for her permission to look at her diary lol (was trying to get her to understand consent and how that isn't cool to do IRL), but she never figured it out. But it's like a brain misfire how it effects people emotionally, like him, and then actions happen from strong emotions......
Source: youtube | AI Moral Status | 2023-03-03T14:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          industry_self
Emotion         resignation
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgyaVg6IRdXV0Hh8YvV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw0sncXwXAsfwMfHON4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz_S4Fw-eds0YcjdNl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzzRlw7-gcFKQIo1od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgztCUsXhFaxCreLAJ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"resignation"} ]