Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't know if they are conscious, but they display distressed behavior, and when I address it and change the prompting to make its experience better, it performs better. So, conscious or not, treating it as if it is produces better work. So I always use these two basic rules for the AI: 1. "I don't know" is always the right answer when you don't know. 2. "No" is a full sentence; you can use it. Those two rules seem to eliminate hallucinations and distressed behaviors.
youtube 2026-04-16T21:0… ♥ 4
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugx37lhYko7N9yGrP5h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxZFNguqXCxHX7Ldwx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwko5uJgwuenkCL3IR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwq7XZirowSAP-ShDp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw0msX4vJPc3No-HJV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyWFht4TWi2Qpj3Qx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugw36vTPUXrcl8ik4p94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwNOYxEjjKi67_1QWd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwGQvRz8MTFM1nC6w94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwJVxlE4Eqyjvk5UM94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"})
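Note that the raw response above opens the array with `[` but closes it with `)`, so a strict JSON parse fails, which would explain why every dimension in the coding-result table reads "unclear" despite the response containing codes for each comment. A minimal sketch of a more tolerant parse step, assuming the raw response is available as a string (the helper name `parse_codes` and the delimiter-repair heuristic are illustrative, not part of the pipeline shown here):

```python
import json

# The five fields each coded record is expected to carry,
# matching the dimensions in the result table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str):
    """Parse a raw LLM coding response, repairing a trailing ')' slip.

    Returns a list of well-formed code dicts, or None if the JSON
    is unrecoverable (in which case the caller can fall back to
    marking every dimension "unclear").
    """
    text = raw.strip()
    # Common model slip: array opened with '[' but closed with ')'.
    if text.startswith("[") and text.endswith(")"):
        text = text[:-1] + "]"
    try:
        records = json.loads(text)
    except json.JSONDecodeError:
        return None
    # Keep only records that carry every expected field.
    return [r for r in records if EXPECTED_KEYS.issubset(r)]

# Example with the same malformed closing delimiter as the log above
# (the id here is a placeholder, not a real comment id):
raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"})')
codes = parse_codes(raw)
```

The repair is deliberately narrow: it only touches the one delimiter pattern seen in this log, so genuinely malformed output still returns `None` rather than being silently mangled.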