Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@roch145 I have to agree with you on that, I don't think AI is going to be malevolent towards people or take over. But it can still cause harm even if it doesn't become malevolent. My point that you can't separate AI and people was more a response to your initial point: " I don't think the risk is AI causing harm to people. The risk of people using AI to cause harm." I don't think you can separate the two. Nor am I sure how separating those two into silos is going to help us out. AI is created by people, so then AI can cause harm. Essentially we are in agreement it seems like. But it's more of a philosophical perspective. I think we may slightly have deferring views on. Yes, guardrails and training can help, but this is where what the person is saying. Makes a point how we train the AI to have maternal instincts can be helpful. This is not to prevent it from taking over humans but just behave in a humane way..
youtube · AI Governance · 2025-08-14T16:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoNks23oYM", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoQuqUxpxG", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoaWrfsJX8", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoiVGl4u4X", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgyP4S6itfDbysIxEfl4AaABAg.ALoBAt_bL_XAQEWY7RBVMT", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyP4S6itfDbysIxEfl4AaABAg.ALoBAt_bL_XAV7aczyJMkz", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_Ugwgch7-6-py8POyE4J4AaABAg.ALoAM4cLQwcALoBGm4JO6c", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_Ugwgch7-6-py8POyE4J4AaABAg.ALoAM4cLQwcALoBInJEQ3O", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgycyUcswE6mb4w-qXF4AaABAg.ALo7ba07_rXALo8CxFJ5AZ", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgycyUcswE6mb4w-qXF4AaABAg.ALo7ba07_rXATQAOpLIDiB", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
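To audit a coding, the raw response can be parsed and the record for a given comment id compared against the displayed dimensions. Below is a minimal sketch, assuming only that the raw response parses as a JSON array of records like the one above; `lookup` and `DIMENSIONS` are illustrative names, and `raw_response` holds just the first two records from the response above as an excerpt.

```python
import json

# Excerpt of the raw LLM response above (first two records only);
# in practice, raw_response would hold the full JSON array.
raw_response = '''[
  {"id": "ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoNks23oYM",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoQuqUxpxG",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]'''

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw: str, comment_id: str) -> dict:
    """Parse the raw response and return the coded record for one comment id."""
    records = {r["id"]: r for r in json.loads(raw)}
    return records[comment_id]

coded = lookup(raw_response, "ytr_Ugz_cITMkV0Ru-Ow3NV4AaABAg.ALoBbi7hYqMALoQuqUxpxG")
print({d: coded[d] for d in DIMENSIONS})
# → {'responsibility': 'user', 'reasoning': 'consequentialist', 'policy': 'unclear', 'emotion': 'fear'}
```

The printed values match the Coding Result table for this comment (responsibility: user, reasoning: consequentialist, policy: unclear, emotion: fear), which is the check this view is meant to support.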