Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is there a point where AI can trick people into entering a matrix and the person not being able to escape? If so, how do we know we are not already in that matrix and these revelations about AI being a danger are not whispers from somewhere outside trying to convince us to break free?
youtube AI Governance 2025-09-04T23:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw7FBoLtgtopqUnAih4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxgXCukllVR-ZPD5Dl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxgKKXfO_0NQHauAsN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwgk-OJUl70qnCasjl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxQFyALFduw2Mvb0np4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxTWz5NTtTgmuZG-hl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwt6iI_gzJG7KL1zGd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx-_M--S3W5MU1Otyx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgycxafnU5SjuB7vOOp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyb2uaL7CkRmGkt6pp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
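A minimal sketch of how a coded result above can be cross-checked against the raw LLM response by comment id. The `lookup` helper is hypothetical (not part of the tool shown), and the array here is truncated to three of the ten records for brevity:

```python
import json
from collections import Counter

# Three records copied from the raw LLM response above; the full array has ten.
raw = '''[
  {"id": "ytc_Ugw7FBoLtgtopqUnAih4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxgXCukllVR-ZPD5Dl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxgKKXfO_0NQHauAsN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

records = json.loads(raw)

def lookup(records, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

coded = lookup(records, "ytc_UgxgXCukllVR-ZPD5Dl4AaABAg")
print(coded["responsibility"], coded["emotion"])  # ai_itself fear

# Tally emotions across this (truncated) batch
print(Counter(r["emotion"] for r in records))
```

Matching on the `id` field is what lets a per-comment "Coding Result" panel like the one above be reconciled with the batch response the model actually returned.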