Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's something very strange about this man. On the one hand, his credentials mark him out as someone very intelligent, highly capable, and accomplished. On the other, some of the things he says are just stupid, or contradictory. Take the whole thing about how AI is thinking, how it has emotions and so on: it is true on the surface, and it produces a similar result, but these systems don't have emotions like we do. They don't really know what something like pain or humour is; they just know which words to select to describe it. In that sense he is a very externally obsessed person: what happens on the outside is all that's important. More than that, if something appears on the outside to have emotions, then for him it HAS them; if it appears conscious, then it IS conscious. Up to a point that's true, but beyond that point it's not: there are many ways AI isn't like us and probably never will be. We should also realise that people are conscious in different ways (some people have aphantasia, for example), and we should acknowledge that, because it's not meaningless; we don't want our consciousness 'defined' by people who may not be conscious in the way many of us are. We have an inner conscious experience and it's important, and I am getting a bit sick of people like this saying "it doesn't really exist", or that it's somehow trivial and unimportant, some kind of means to an end. Because the fact is that if you put a robot that avoids damage (pain simulation) and a human into hell, we both know the human will have an awful conscious experience and the robot won't. That's pretty damn important. He also talks about how he doesn't feel guilty about helping to create AI, because back then people didn't know it would progress so quickly that it would pose such a high risk so soon.
But then he goes on to say that he later started an AI business just so he could sell it to make money, then worked at Google, again just to make money, and none of that was about making AI safer; it was about "improving" it. So the reason he doesn't feel guilty can't be that he didn't know it would progress so fast, because now (and ten years ago) he does know that, yet he still works on it. A bit of a contradiction there.
youtube AI Governance 2025-06-21T11:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzOSQdWH23D6PpCdJV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyFslA_aSuePdbEE454AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwBsjB08AuVay7ieyR4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugy7KOsFbTHO6PUJ8EZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxg0VCZki_YmuzafzJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxt-b8135SNcbeRe6V4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzrsk1LJ1E3P66pIDl4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwqJX8mpg4ZLP7Fbd14AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw0A36SZFLCvjVr44t4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw-wNhObWspWHF9WVh4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"}
]
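A minimal sketch of how a raw response like the one above could be parsed and checked before the codes are accepted. The allowed-value sets below are only inferred from the codes that actually appear in this output; they are an assumption, not the tool's real codebook.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = '''[
 {"id":"ytc_UgzOSQdWH23D6PpCdJV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugzrsk1LJ1E3P66pIDl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]'''

# Assumed codebook: value sets inferred from the codes seen in this output.
DIMENSIONS = {
    "responsibility": {"ai_itself", "government", "company", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed", "unclear"},
}

def validate(records):
    """Return (id, dimension, value) triples whose code falls outside the assumed codebook."""
    bad = []
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                bad.append((rec["id"], dim, rec.get(dim)))
    return bad

records = json.loads(raw)
print(validate(records))  # an empty list means every code was recognised
```

A check like this makes it easy to flag responses where the model invents an off-codebook label (or drops a dimension) before the codes are written back into the table above.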