Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is why we need to STOP using words like "hallucination" and other junk that humanizes these programs... It's hilarious how the "only proper term" for an ERROR with 'AI' is "hallucination" but with humans, we say "error" so often... "human error", "I've made an error"... Seriously? Where is the INTELLIGENCE in any of this?
Source: youtube · AI Moral Status · 2025-07-09T17:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwTLnS29z80VNUjqil4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyBLth51ufrFtrpfnh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzRPQcE6T1iMdO-d6p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx5m8ECYGQGCFVDXJt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxHJ1-4zdQY_pFUf3N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxWxhHdAa6mx0gfW394AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxlSKKpKgtTWw48N6F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy0bm7EL0MazxzOwnV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyJtm8GduFhIefD9pt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxZ6yCZGDb8_ogcyq54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"} ]