Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When you talked about things than we might be better than AI, I have an idea: if we can make AI busy figuring out things that can’t be solved (Pi/human society rules, universal rules), which have no right or wrong answers based on percentage fluctuations, then we will still be superior and in charge, right? I mean subjective human will always gets its ways to determine the answers.
YouTube · AI Governance · 2025-06-20T23:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzW7ZW_QRt7v0FSS2B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzVTm4qN2rAaLOF0YV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzXrLllyREYSJ9Nv9l4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyOGpoBKOC1aSGc6I54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyeFGisZ_Fzri1UFnZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzkhDZSIKiBzEMD3U54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzZRekPL17JWGaEuS54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyAT0-j_B8YeEa-sbl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyufCUXULaNGrEfFX94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwDsLHPq3JDUcQJgzt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
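Before a raw response like the one above is used downstream, it is worth parsing and sanity-checking it. The sketch below is a minimal example, not part of this tool: it assumes only the five fields visible in the response (id plus the four coding dimensions from the table); the helper name `parse_coding_response` is hypothetical.

```python
import json

# Fields every coding record must carry, taken from the raw response above.
# (The allowed value sets per dimension are not documented here, so only
# presence is checked, not vocabulary.)
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment id.

    Raises ValueError if the payload is not a JSON array of complete records.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    by_id = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {sorted(missing)}")
        by_id[rec["id"]] = rec
    return by_id


# One record from the response above, used as a self-contained example.
raw = ('[{"id":"ytc_UgzVTm4qN2rAaLOF0YV4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
coded = parse_coding_response(raw)
```

Indexing by id lets the page look up the single record that matches the displayed comment; a failed check flags a malformed or truncated model output rather than silently coding the comment as missing.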