Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I like that he was humble enough to say "I don't know". There is no way we can predict how beings much more intelligent than us will act. We don't even understand our own brains fully. Honestly, with all the tragedy going on in the world, I don't feel like getting scared at AI right now. It could it be that if we try to chain it, to control it, that that would be the very reason it decides that it should get rid of us. And then this all becomes a self-fulfilling prophecy. I liked the tiger cub analogy. It might all depend on what it learns growing up. So we should raise it lovingly like we would do with a very smart child. Sadly, if it is learning from the way we communicate in the internet, the things we do to each other and at the way our capitalist system is built and makes us behave right now, the development of its empathy might not be at the top of the list. In any case, let's don't go be a plumber just yet (unless it is your dream to do so). Go do something that you are passionate about and is fun! If we end up jobless in the end, maybe we will have more time to spend with our children or helping out old neighbors do their shopping.
youtube AI Governance 2025-06-29T20:3…
Coding Result
Dimension      | Value
---------------|----------------------------
Responsibility | distributed
Reasoning      | mixed
Policy         | regulate
Emotion        | mixed
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwquaengYT7QHoDOEZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFLKyHSySKN86fECh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgybcpXvEPsiR2dLplp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyIo9pPPyM6ohJVNNZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy9OkmL5CKKMrrp_VB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfnWk5mo4hL2UtfkF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgykxVOuJeZi_UF5-Pp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxI8JpxZec8Zr5NfO94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzUi4BjIMB_WSdiUDp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzCp5U50_1Sab8zm514AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
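To inspect the coded output for a specific comment, the raw response above can be indexed by comment id. A minimal Python sketch, assuming the raw response is a valid JSON array of per-comment code objects (the `raw` string here quotes one record from the response above; the variable names are illustrative, not from any pipeline API):

```python
import json

# One record copied from the raw LLM response above, as a JSON array string.
raw = '''[
  {"id": "ytc_Ugy9OkmL5CKKMrrp_VB4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# Index the parsed records by comment id for quick lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

# Look up the coded dimensions for a given comment id.
record = codes_by_id["ytc_Ugy9OkmL5CKKMrrp_VB4AaABAg"]
print(record["policy"])   # → regulate
print(record["emotion"])  # → fear
```

The same lookup generalizes to the full ten-record array: parse once, index by `id`, then compare the per-comment codes against the aggregated coding result shown above.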