Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Part way through and this is a REALLY interesting and thought provoking talk. My caveat so far is his take on 'social media' and biases it feeds at about 21 minutes, he says that the job of deciding what is good for society is that of politicians which particularly in the current climate is patently wrong. As a 'conservative' when he refers to listening to the BBC and reading the Guardian that is a red flag to me which emphasizes the point about who gets to decide what thinking AI should be promoting. His ideas on the social influence of AI are flawed just as no doubt he would think mine were. Getting near the end and here's another flaw in his thinking ( not trying to be arrogant here, i know he is much smarter than i am but being smart is not the same as being perceptive) is as he admits he is a materialist , that means he doesn't believe there is anything outside of the material world therefore he reduces humans to simply a point on a scale rather than something unique with qualities outside of the simply material. Interestingly of course he is also not really a materialist because he values interpersonal relationships (why?) and in that he is borrowing from the non-materialist's worldview and he thinks we should be protecting ourselves from AI but again why since as a materialist anything smarter than us is just the next step in the evolutionary chain, humans have no more right to survive than chickens .
youtube AI Governance 2025-07-05T10:5…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyTbPPLe0_1Jm6R5554AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyudsmXDch-AyG7DWh4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyh7VdsLlCGTRlLNPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxRcEmO2rEAzqFLT014AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwTeYXwQl43o2sIjz54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz7wTivutkRmh2wPmR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzlwosRm4mNaCOhwMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVZx_-yCgeoD9Fcxl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyLvtgnsL2-OYuome94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgydW-in4CmERicjdx54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
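A raw batch response like the one above (a JSON array of per-comment code objects keyed by `id`) can be parsed back into the coded dimensions for a single comment with a few lines of Python. This is a minimal sketch, not the tool's actual implementation; the function name `codes_for_comment` and the sample id are hypothetical, while the field names match the JSON above:

```python
import json


def codes_for_comment(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    and return the code object for one comment id.

    Raises KeyError if the id is not present in the batch.
    """
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    raise KeyError(comment_id)


# Hypothetical example with the same schema as the response above:
raw = (
    '[{"id":"ytc_example","responsibility":"government",'
    '"reasoning":"mixed","policy":"none","emotion":"mixed"}]'
)
codes = codes_for_comment(raw, "ytc_example")
print(codes["responsibility"])  # government
```

Looking up codes by `id` rather than by array position keeps the check robust if the model returns the batch in a different order than the comments were sent.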