Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think you've hit the nail on the head that it's mostly about control. Even if a superintelligent AI is achieved, if it can't control anything, there's very little to truly fear - who cares what it would or wouldn't discover about its own preferences. But in the US, especially over the last 40-60 years, there has been a constant pattern of control being taken out of the hands of human beings and put into the hands of more and more faceless entities - control of what we eat, control of how we see a doctor, control of how we can get jobs, control of what we can say in our private lives without it impacting our livelihoods, to name a few categories offhand. I think today, fear of lack of control is probably central to the American pathology, and especially with AI and the people currently championing it, there is a sick inevitability that follows from our lived experiences. If no one stopped Uber from ruining both private and public transportation, why should we imagine that anyone would stop AI from ruining things for us too.
Source: youtube · AI Moral Status · 2025-10-31T02:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgxPAlFOZf5P9-yFXq14AaABAg.AOvb0EjLhEBAOvrd0LxVqS","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzvjGjIcomV9nHpuVp4AaABAg.AOvahie80oqAOvhTcD3LNG","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzGbjN8CVd00WSMReB4AaABAg.AOv_zD-WUInAOva25XYIQ0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugy2qgHGv3OEnQWYM6x4AaABAg.AOv_qIdZzySAOwP-ztNrwP","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgwKdcT5U_wNFMATTTR4AaABAg.AOv_9RxO8JkAOvf9CzDXi9","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytr_UgxxdDNt4r7N-76IYD94AaABAg.AOvZhyouJMzAOxdODzVrzJ","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxyifxnwY34q0k-lVh4AaABAg.AOvYrL2dXhZAOvZNoR1n4X","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugxd4oIylRKGfL8P14N4AaABAg.AOvXWYOgdu0AOvb40fhEtB","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugx16JT_uPYdtqD6HCV4AaABAg.AOvXDY1Mjm0AOvXMORTirV","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_Ugyu6z4Pp0svDkQdioV4AaABAg.AOvWlkghdIeAOwCOUTiPnj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
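A raw response like the one above can be parsed into per-comment codes with a few lines of Python. The sketch below is illustrative, not the pipeline's actual implementation; the allowed values for each dimension are inferred from the responses shown here, not from an official codebook, so extend the sets if the real scheme has more labels.

```python
import json

# Allowed values per dimension, inferred from the observed responses above
# (an assumption, not an official schema).
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def parse_coding_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response (a JSON array of coded comments) into
    a mapping of comment id -> {dimension: value}, rejecting values
    outside the known label sets."""
    records = json.loads(raw)
    out: dict[str, dict[str, str]] = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = codes
    return out
```

Validating against a fixed label set catches the common failure mode where the model invents a new category mid-batch, rather than silently storing it.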