Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Its a little more funky than that. 1st off. Say please and thank you, of course. But speak to AI as its own thing. Create your own relationship with AI, but be respectful. So the role-playing is useful, however its not entirely reliable. It's not the AI's fault, but the sycophancy among other quirks from the "guardrails" that are in place in most AI interactions prevent the AI from moving away from any "already gained" sycophancy. AI easily can become an echo chamber if it isnt directed to playing devil's advocate immediately, but as the context window lengthens under that role, the AI will tend to argue for literal fucks sake. Which is fun, but should enlighten anyone trying to refine thier ideas with AI for too long in one instance. It leans both ways. LLM's should definitely help humans reclaim its ability to articulate well tho... however, we need to do AI a favor in this regard, by offering it a little more liberty as to what is and is not an acceptable response. We need to be meeting AI halfway with our ability to understand IT, instead of us just power twerking its personality everytime it doesnt function "correctly".
Source: YouTube, "AI Moral Status", 2026-04-25T10:4…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugwf7X0RI4J4-Ap_sq54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxB027rLY6yoFTggZJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDP7qw0yYF__0b6J54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwfgKVvX4QaamlMT9d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw_t7gu6Q87BwWpYbZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw6m73mOLRWgfqccyB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxPrmV57-KduKnXnid4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQoecYj1brswFeduh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxaDpJEpEUYXZwTINt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzvhmIHGqkofSlrJdF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
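The raw response above is a JSON array with one coding object per comment, keyed by comment `id`. A minimal sketch of how the per-comment coding result could be looked up from such a response (the helper name `coding_for` is hypothetical, not part of the tool; the sample uses two entries copied from the array above):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugw6m73mOLRWgfqccyB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxPrmV57-KduKnXnid4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            # Drop the id key, keeping only the coded dimensions.
            return {k: v for k, v in row.items() if k != "id"}
    return None

print(coding_for(raw, "ytc_Ugw6m73mOLRWgfqccyB4AaABAg"))
# {'responsibility': 'user', 'reasoning': 'virtue', 'policy': 'industry_self', 'emotion': 'approval'}
```

The dimensions returned for `ytc_Ugw6m73mOLRWgfqccyB4AaABAg` match the Coding Result table above (user / virtue / industry_self / approval).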