Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Whether AI develop to want something is a big question. Whether this something is incompatible with human existence is speculative. Based on these to make predictions of doom with this level of conviction is irrational. Hard to see these people as anything other than anti AI cult trying to validate their dogmatic views.
youtube AI Governance 2025-10-16T07:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw87q9zbZpHndyBToh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgqUoJfgOROg0z8v94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxHMLXRr5iWJK8TPC14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxBIFZaYUxl9uHUwgR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy87F2xB863izA0VL54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxmWvMLtxhgNNtrt4V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxdNMxsQXIFl-zWOat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgxJfABiBpM6-L5_NN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwBi8g74GabhQa4XVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwaSWTdB9heV1BMBnd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
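As a minimal sketch of how the raw response above can be inspected programmatically, the snippet below parses the JSON array, looks up the coding for one comment id, and checks that every record carries all four dimensions. The dimension names come from the payload itself; the label sets checked here are only the values observed in this batch, not an assumed full codebook.

```python
import json

# Raw LLM response as captured above (truncated to two records for brevity).
raw = '''[
  {"id":"ytc_Ugw87q9zbZpHndyBToh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwaSWTdB9heV1BMBnd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

records = json.loads(raw)

# Index the batch by comment id and pull one coding result.
by_id = {r["id"]: r for r in records}
coding = by_id["ytc_Ugw87q9zbZpHndyBToh4AaABAg"]

# Sanity-check each record: all four dimensions must be present.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}
for r in records:
    assert DIMENSIONS <= r.keys(), f"missing dimension in {r['id']}"
```

A check like this catches the common failure mode of batch coding, where the model drops a field or an entire record, before the results are written back to the database.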