Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So far all I see is large language models that help us learn faster. That's why I can't feel fear from this. It's not like we're hooking this up to decide for us when we should nuke someone right? We already decided that would be stupid and trust me there is a narcissist in charge who only wants to know they themselves are personally in control of such things.
youtube AI Governance 2025-09-04T14:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy63kpjuhhgUMJY7jh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwZEMQeXEHwivjfCvV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyKumYzVw5xELudLKB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxKhz-1fpjD635mdmR4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyMrG4KDqj0Zz9qLVd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugws8RSwTVfbiJ4EIfN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgydXxD5KENHlXg7r-d4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxs1VIgK9XlsErCNxR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgycMF7xVAmYqRtxfF14AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxKBuDMhUknUqCUHNZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]