Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you even know the basics on how LLMs are structured and how they use and configure information you would know that it is nothing like human researchers taking and using information, LLMs take information verbatim, there is no transformative human element
youtube AI Responsibility 2026-04-11T18:5…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzTxTkBU_9qWAF956l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyt3NsVGWv2_R1PeV54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxkaz38EyNuxuxOUWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyS5FXtBQv_V-pJGD94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZ4et40pdS6j6uYId4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwoMRM3cifFAukQ_SJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxtUS6avwrf0-xBCIh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgyiEvy9U2o92Zww7s54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx_s3REVcDRhsVvcz54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwgg-n0Bn1pWLh9ndx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
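A raw response like the one above can be parsed and sanity-checked before its codings are stored. The sketch below is a minimal example: it indexes the entries by comment id and rejects any value outside the codebook. The `CODEBOOK` sets are an assumption inferred from the labels visible in this output, not the project's actual codebook; the excerpt is truncated to two entries for brevity.

```python
import json

# Two entries excerpted from the raw response shown above.
raw = '''[
  {"id": "ytc_UgwZ4et40pdS6j6uYId4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzTxTkBU_9qWAF956l4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Allowed values per dimension -- inferred from labels seen in this
# project's outputs (an assumption); replace with the real codebook.
CODEBOOK = {
    "responsibility": {"company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "resignation", "mixed", "unclear"},
}

def parse_codings(raw_json: str) -> dict:
    """Parse a raw LLM coding response and index entries by comment id,
    raising ValueError on any value outside the codebook."""
    by_id = {}
    for entry in json.loads(raw_json):
        for dim, allowed in CODEBOOK.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: bad {dim}={entry.get(dim)!r}")
        by_id[entry["id"]] = entry
    return by_id

codings = parse_codings(raw)
print(codings["ytc_UgwZ4et40pdS6j6uYId4AaABAg"]["policy"])  # regulate
```

Indexing by id is what lets a per-comment view (like the Coding Result table above) pull its row out of a batched response.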