Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm here at the beginning of this podcast just to hear what the heck A.I would want to do to us that would make it dangerous, aside from giving us ideal living ways that won't work because humans don't want to be ruled or have a ruler anymore because we've finally found out that doesn't work
youtube AI Governance 2025-06-23T13:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxmfr-yt6SnOy5g-gR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzBB4jz5PeLKVkCZfV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwFBXboTnYCB54gjPZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyCDDXy2Au46mlEQkx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxEzmSkmWLZ0qNo6jp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxENBZs2kOTR2jtBIB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwZ2u5-_cFaLadGjM14AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw9rtdwhxG7VrkvnvB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwHLshQ4nwE6xpeaAl4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzSCnWx1Tr3Sjiab_54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
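The raw response is a JSON array with one coding record per comment, so a coded comment can be matched back to its record by `id`. A minimal sketch of that lookup, assuming standard JSON parsing (the ids and records below are copied from the response above for illustration; the helper `lookup` is not part of any existing tool):

```python
import json

# Two records excerpted from the raw LLM response above (illustrative subset).
raw = '''[
  {"id": "ytc_UgwFBXboTnYCB54gjPZ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyCDDXy2Au46mlEQkx4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "resignation"}
]'''

def lookup(raw_response: str, comment_id: str):
    """Parse a raw LLM response and return the coding record for one comment id."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup(raw, "ytc_UgwFBXboTnYCB54gjPZ4AaABAg")
print(record["responsibility"], record["emotion"])  # ai_itself mixed
```

Returning `None` for an unknown id makes it easy to flag comments the model skipped or whose ids it garbled, which is a common failure mode when batch-coding with an LLM.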