Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Steven Levy said something that struck me at 14:20 and it's a brilliant point. Demis isn't _trying_ to be a salesman here, but I think his quest to create this thing that will magically solve our problems shows a fundamental misunderstanding of how we currently operate as humans. Steven's point is that we live in a world where we can have peace NOW. We can have a good life NOW. We can stop bombing and murdering each other NOW. But we don't. Let's say AGI cures cancer, AIDS, and food shortages, and brings abundance like Demis promised. It's too abstract for a lot of people to grasp how that even plays out; do we automatically achieve world peace because of this? I say this as someone who's been building things with AI for the past 2-3 years and I love AI, so I'm not a hater. But I balance my perspective with the missing piece -- empathy. We have so much wealth. Demis himself is a wealthy person. He even admits that that wealth is not fairly distributed. He could peel off a good chunk of his wealth and use his brilliant mind to help solve some of these problems. And I'm not saying he HAS to, I'm just using him as an example. I think the real problem we'll always face as humans is our own humanity. Until enough people realize that we have more control over our future and outcome than we realize, things will remain the same.
youtube 2025-06-07T18:5… ♥ 441
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugz-WNKxz5yxZO5ufzh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugyn_r3e1GTRp6akSR54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxpxXhOw8Gbz1XfTlJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzMZjYqmrY3TtzZFth4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwFSMbAHybT81g8TcF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugxq4inAcuj1NxsHCCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyOAJdWka7a56YHTFV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugzi0acRg3dn_r6AgQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyiefXddm_bKng38_d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwQaXN-Yr6ec3LbMbF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
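The Coding Result shown above is one record from this array, matched by comment id. A minimal sketch of how the table values can be recovered from the raw response, assuming the raw response is valid JSON and that the displayed comment's id is `ytc_UgyOAJdWka7a56YHTFV4AaABAg` (the record whose values match the table); the array is abbreviated here to two records for illustration:

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes.
raw = (
    '[{"id":"ytc_Ugz-WNKxz5yxZO5ufzh4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"},'
    '{"id":"ytc_UgyOAJdWka7a56YHTFV4AaABAg","responsibility":"developer",'
    '"reasoning":"virtue","policy":"unclear","emotion":"mixed"}]'
)

# Index the records by comment id for lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the code for the displayed comment (id assumed, see lead-in).
code = codes["ytc_UgyOAJdWka7a56YHTFV4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {code[dimension]}")
```

If the model wraps the JSON in extra text or emits malformed output, `json.loads` raises `json.JSONDecodeError`, which is one reason to keep the exact raw response inspectable on this page.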