Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My takeway from this is... kinda flipped. I am not worried about AI. I am worried about people with access to AI. AI seems like a great set of tools, and I think ultimately it would be benevolent. The Dalai Lama once said that the ultimate expression of selfishness is complete selflessness, because the best way to have perfect happiness is to bring happiness to all around you so you are always surrounded by happiness. Humans though... give us a stick and most of the time we'll be holding a club. And AI is a really big stick. On the flip-side, Terminator 2 also seems to suggest a possible solution. Individually we cannot resist whatever a Super-AI will do. But if we build community-AIs (per neighborhood, online community, etc), removed from central controls, each responsive to and responsible for a community, the overall sum of AIs will oppose each other. And as long as cumulative humanity is 'good' we may have a chance to resist the bad actors with this club. But if we let governments regular and control them 'for our safety'... well, governments don't really have morals and they don't have friends, they have interests. As 2025 is teaching us in the US, brutally.
youtube AI Governance 2025-06-19T08:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx1VycVHCGFi8bzAbZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugykh-a_TtmyX4KKr3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw6M8lSbiH3hnrVIuB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx0HkYLdltjTrCeoi14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx-IY1h8e9xOKmImSJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxYt01ZwYF13Vij5LZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFV5avURTD3JZ7VmZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwqDaMXWrPFCIP3XLR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyKMV3gBeRpEmOpqeZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxxH1SiHBQFdpOQ34Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]
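To inspect the coding for any one comment, the raw batch response can be parsed and indexed by comment id. The sketch below is a minimal example, assuming only that the response is valid JSON in the shape shown above; it uses two entries from the actual output and standard-library `json`.

```python
import json

# Two entries copied from the raw LLM response above (truncated for brevity).
raw = """[
  {"id":"ytc_UgxYt01ZwYF13Vij5LZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0HkYLdltjTrCeoi14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the batch by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment shown on this page.
coding = codes["ytc_UgxYt01ZwYF13Vij5LZ4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → user virtue approval
```

The same lookup generalizes to the full ten-entry response: build the dict once, then retrieve any coded comment by its `ytc_…` id.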