Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is typically missing the entire point. First, we don't have intelligent AI, no matter what anyone tells you. It is probably a very long way off, or it cannot be done, or it will not be done before we extinguish ourselves. Second, since right now there is no conscious, intelligent AI, all of what we are basing this distraction on is what we hear these people say and what we have seen and been primed to see. Finally, AI will be used against most of us in our lifetimes, and probably only used human against human. Humans need to see that we have a selfish flaw that will destroy us unless we redesign a new Constitution that takes all this tech abuse into account. Some of these kinds of crimes are so dangerous that the laws we have today will not work, as we are seeing.
YouTube AI Governance 2025-12-26T00:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyRxRfC6xUrMa9NxR94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyf2QDf6rBaEzUF2j94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxg00L8q3jOGQxIDNB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzMkwZBwE13Nqtv65x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyYReWrncYbsPu14ip4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzuBzD9f_LfexZBuRh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzLKJt0wHox6zqp-3N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzx9XD9aQDZ3MPvgEd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz1UQcJgttbjMGsei14AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzReM8qceiOUQfhFYR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
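The Coding Result table above is the per-comment record pulled out of this batched raw response by its comment id. A minimal sketch of that extraction step, assuming the raw LLM output is valid JSON (the variable names and the truncated single-record string here are illustrative, not part of the original pipeline):

```python
import json

# Illustrative excerpt of the raw LLM response shown above: a JSON array
# with one object per coded comment (only the matching record is included here).
raw_response = (
    '[{"id": "ytc_UgzMkwZBwE13Nqtv65x4AaABAg", "responsibility": "none", '
    '"reasoning": "unclear", "policy": "none", "emotion": "resignation"}]'
)

# Parse the batch and index the records by comment id for O(1) lookup.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Look up the record for the comment coded in the table above.
coded = by_id["ytc_UgzMkwZBwE13Nqtv65x4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → none unclear none resignation
```

The dimension values in this looked-up record (responsibility `none`, reasoning `unclear`, policy `none`, emotion `resignation`) are exactly the ones shown in the Coding Result table, which is consistent with the table being a rendering of the fourth object in the raw array.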