Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we’ll be sitting here in 2027 and we will be just fine. I don’t doubt there are dangers of superintelligence, but I disagree on how realistic the timeline is, given the time it will take to adopt AI meaningfully into the global economy. We assume superintelligence will act a certain way toward us, but given the guardrails we set today, or lack thereof, it could very well behave in a very different way from what we expect, and there’s no guarantee it will be able to solve all of our problems, as many are speculating. Will it have emotional intelligence as well? Will it think like us? Will it be able to perceive time and communicate context in the same way as us? Can it match human intuition and the ability to make complex decisions intuitively? I’m not so sure.
youtube AI Governance 2025-10-25T01:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz0grQ5upHaZFDapD54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_snWFzPmg0TVW0OV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwX-okwr4dXFqcWZE54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwEZJSl3vbZ-Yvpc8B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx8eljpjyhCuMyqdJR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZtIBEX9R1qSNC3qx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1lCFxrppq4zPlKaB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxA9rdoSNQU8mukh5t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwBpoNMwDKgGr5aNnx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugy7KsaGGua8Sn_TOYx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
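A response like the one above can be checked before it is stored. The sketch below parses a raw LLM coding response and validates each record against the four dimensions. The allowed label sets are inferred only from the values visible in this section; the actual codebook may define more categories, so treat `ALLOWED` as an assumption.

```python
import json

# Allowed labels per coding dimension, inferred from the values seen in
# this section (hypothetical; the real codebook may differ).
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"indifference", "resignation", "outrage", "fear",
                "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim} value {value!r}")
    return records

# Usage with a single (hypothetical) record:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # → approval
```

Validating at parse time means an out-of-vocabulary label (an LLM hallucinating a new category, for instance) fails loudly instead of silently entering the coded dataset.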