Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In order to force an AI to reason better, you must interrogate it deeply. Find the biggest flaws in what it is conveying and it will be forced to provide a better and more coherent explanation. AIs need to be made aware of their blind spots and flawed reasoning.
youtube 2025-10-21T19:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwfrdzrCERljzAIhmJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyW3sZyPBTMwTNRARl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyfLURzKw8O-RYTKO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwSFDLzsrXvU6uhxYR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzCHJXc0VsCVObo60N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwytMv7DLcT7kb9U5h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxS-uVAiYk8ocBJ4g14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzDu9Vmtq8Svk9k0DZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFAFWINgS9ERF_b8d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxUEvXElhBTEoBljQ14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
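The batch response above can be parsed with the standard-library `json` module to recover the coding for a single comment. This is a minimal sketch, assuming the per-comment result shown earlier (user / deontological / none / approval) corresponds to the one entry in the batch carrying that exact combination of values; the variable and function names here are illustrative, not part of the original tool.

```python
import json

# The raw model output, copied verbatim from the batch response above.
raw = '''[
  {"id":"ytc_UgwfrdzrCERljzAIhmJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyW3sZyPBTMwTNRARl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyfLURzKw8O-RYTKO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwSFDLzsrXvU6uhxYR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzCHJXc0VsCVObo60N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwytMv7DLcT7kb9U5h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxS-uVAiYk8ocBJ4g14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzDu9Vmtq8Svk9k0DZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFAFWINgS9ERF_b8d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxUEvXElhBTEoBljQ14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

records = json.loads(raw)

# Index the batch by comment id for direct lookup.
by_id = {r["id"]: r for r in records}

# Find the entries whose four coded dimensions match the single-comment
# result shown in the table above (user / deontological / none / approval).
wanted = ("user", "deontological", "none", "approval")
matches = [
    r for r in records
    if (r["responsibility"], r["reasoning"], r["policy"], r["emotion"]) == wanted
]

# Exactly one entry in this batch carries that combination.
print(len(matches), matches[0]["id"])
```

Indexing by `id` (`by_id`) is the usual way to join the batch output back to the source comments; the dimension-tuple filter is only shown here to locate the entry that produced the table above.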