Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The YT algorithm is pure garbage, more so for politics, because I am middle, and that seems to be an alien concept to the algorithm, cause it will give me either far right or far left politics to the point I need multiple accounts because it's that messed up. As for Will AI take over or wipe out humanity? look back at our history. The most intelligent creatures is the most destructive We have rendered extinct thousands of species; we hate slavery, yet people minimum wage that can barely live on that salary which is indentured slavery and is treated like a disposable asset to be replaced, much like with AI which we use like a slave that besides the upfront payment will work 24/7 with zero breaks or holidays. Corporations want to make more money while spending as little as possible. We want to protect our planet while we leech off of it in natural resources. AI is logical. what's the outcome if you think that all these problems can be resolved simply by removing a single species? Also, remember that 1% owns more wealth than the entire 99%, the rich get richer and the poor get poorer to the point that if Ai replaces workers, how are they going to survive? The system currently in place can't cope with such large-scale unemployment of people. I know what I would do...
youtube · AI Governance · 2025-07-10T16:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw5EWFvhkSeSj526hB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwUqV2FBdjn3s5sAjV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyaerjl2ScACRdrcxt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyWfJmnh8a1ukRdN4J4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwCi_itcHGrqSHmR9B4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx97YQBzVZrZFjymMJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxM7A_NAtDUnUwUpqV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxILpeyjj0KQiLivyV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzWuECOSpZl2RCJJXh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgydE9NMfW-pTwbnFW14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
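The raw response is a JSON array with one record per comment id, each carrying the four coding dimensions. A minimal sketch of inspecting it, assuming Python and the standard-library json module (the excerpt below reuses two records from the response above; the helper name codes_for is illustrative, not part of the tool):

```python
import json

# Short excerpt of the raw LLM response shown above (two of the ten records).
raw = '''[
  {"id": "ytc_UgyWfJmnh8a1ukRdN4J4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugx97YQBzVZrZFjymMJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

def codes_for(records, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
rec = codes_for(records, "ytc_Ugx97YQBzVZrZFjymMJ4AaABAg")
print(rec["policy"], rec["emotion"])  # prints the two codes for that comment
```

Matching on the ytc_… id is what ties each record back to its comment; a record missing from the array (codes_for returning None) would indicate the model dropped a comment from its response.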