Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Saying "ChatGPT isn't smart, it's something much weirder" is kind of a semantics game, in my opinion. Foundation models are smart in unexpected, not fully understood ways; that is why we have terms like "alignment problem," "black box," and "hallucination." We don't have a clear, detailed account of the rules AI operates by; we have a fuzzy general idea based on how well models perform on benchmarks. Sure, pointing to a set of rules and saying "see, rules are not actually smart" misses the point that we don't actually know those rules precisely, because the rule space was explored by a digital simulation of what we typically consider the biological source of intelligence: a neural network. I will concede that "AI doesn't understand things the way an organic intelligence or a human would"; that is more than fair. But a claim like "AI doesn't understand anything at all" is obviously not valid, because if it were even remotely true then AI would not be useful at all either, by definition. It is more accurate to say we don't know what AI understands, except that it is very likely very different from what we understand, which again brings me back to the relevant terms: "alignment" and "black box."
Source: youtube · AI Moral Status · 2025-11-18T21:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwIAaPPRLXE4Fn4Sqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugys6Qv-XehHwSYnnQp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugzx5nPOWSs-WxwErs94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxBeTJFNwF3SQq6Cl54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxDPgN6R-_bVSvuJzR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwqOajLar9znY6abtl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwKR9cn5lwmHfefHth4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxZrgBzrlT0P2OR8Pp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgyVCWbBcG5RT6j5Ge94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzlT0d2Hqa_Tt3bYTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]