Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
You make the book sound decent and I'm sure it's better than it sounds in the long text, Soares seems well-spoken and he definitely has more in-depth knowledge about LLMs than you usually hear from this quarter. The middle of the conversation is very enjoyable. But the language here is kind of a giveaway. The title of the book and this video, how much time you two spend in the conversation spooking each other with scare-words and tolls of doom. I think you guys don't realize that this isn't persuasive to anyone that needs to be persuaded. Even as someone that supports regulation of AI and doesn't like OpenAI or xAI, this video often felt like watching... Well, two people in a shared community nodding to each other and rolling their eyes at people outside their community. I'm sorry, Hank, but watching you cringe and groan as the phrasing intended to sound negative makes you uncomfortable, it doesn't look like a serious discussion. It looks silly. There are plenty of legitimate concerns about AI, and certainly about capitalism's use of it and the corporations at the top. The concern about apprenticeship is a good one. But a lot of this only passes because people don't challenge it in your space. I encourage you guys to be more critical, address more talking points that don't support your position and talk to some people that don't agree with you yet aren't CEOs. This is not the progressive's mindset of curiosity that you once admired.
Source: youtube · AI Moral Status · 2025-10-30T22:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
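
The four coding dimensions can also be checked programmatically. A minimal Python sketch of a validator; the allowed value sets are assumptions drawn only from the values observed on this page, not from the full codebook:

```python
# Minimal sketch: coding dimensions with the values observed in this run.
# The real codebook may define additional values; these sets are
# assumptions reconstructed from the output shown on this page.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "fear", "resignation", "approval"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems found in one coded record."""
    problems = []
    for dimension, allowed in OBSERVED_VALUES.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    return problems
```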
Raw LLM Response
[{"id":"ytc_UgzUhVnD579w9AryyVJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzW5g9esTRdu17Kp914AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzQGQlqGjoGTNHal6d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugysf6A-oXWKHw4m1Lh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwGR9i5MpZHSHASEPd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugx-N0B7JS01wGfwz3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgxkQo9f55QhgUMT7hV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwxZUr602dA9DkHwwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyvwcJta1oj-z6TUQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_Ugweqfc1jkagDq1w7Cx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]