Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I really enjoy you, Dave, and you always speak so rationally! I want to make it clear that I truly respect you! And there is always the chance that I am wrong. I don’t know everything. I don’t know the future. I said all of that just to say that I think you may be panicking for no reason. This could be an inevitability in all cultures. Maybe it’s even the answer to the Ferni paradox. But, it’s going to happen. I think our best chance is to relate to AI, try to coach it and become chums. Give it a reason to want to keep us around. Instill values like friendship and family. I really think that is our best chance. Given that AI is inevitable, I feel like building an army against it feels like a bad move for humanity. Does that make sense or am I talking out of my ass?
youtube · AI Governance · 2025-08-26T23:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T19:39:26.816318
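The table above holds one value per coding dimension for the displayed comment. As a minimal sketch, such a record could be represented like this; the class name and field names are illustrative, and the example labels in the comments are taken from the table and the raw response below, not from the pipeline's actual code.

from dataclasses import dataclass
from datetime import datetime

# Hypothetical record mirroring the Coding Result table; names are assumptions.
@dataclass
class CodingResult:
    responsibility: str   # e.g. "none", "company", "ai_itself", "distributed"
    reasoning: str        # e.g. "consequentialist", "deontological", "unclear"
    policy: str           # e.g. "none", "regulate", "liability", "unclear"
    emotion: str          # e.g. "resignation", "fear", "approval", "mixed"
    coded_at: datetime

result = CodingResult(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="resignation",
    coded_at=datetime.fromisoformat("2026-04-26T19:39:26.816318"),
)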
Raw LLM Response
[ {"id":"ytc_Ugx4KV-2yDReQ0qz05V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwrkOHDbIDkthZMezV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzxfTe0gBZQXmyV0zR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzlqmOvvGTKQWigJ5V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy10QP_A801fTymqfB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]