Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sure, but only after everyone understands what reinforcement learning is and how it drives tools like ChatGPT, and only after every student knows what a logical fallacy is and can detect one, because, in case you did not know, ChatGPT commits logical fallacies. The red herring fallacy is the most frequent, and that is because it is also common among people.
youtube 2023-05-20T08:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugzgpr7kr5veAlMVji94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyBSJKx-0pj5I09tsB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "concern"},
  {"id": "ytc_Ugwsp1orJx-m_-CTADR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw5ylGoYLHZaMaqLzd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugycq_yXHaUUP6l59KF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz7ZPNE3I64WxcG3Gp4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyH5AdqTp7v3vmW2FB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "concern"},
  {"id": "ytc_Ugxj32Jw90ZpPlwlinx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyR0zsjVhi4kruOvV94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwlYW75e2OBoRpiOtp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
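The raw LLM response is a JSON array with one record per comment, each carrying the four coding dimensions shown above. A minimal sketch of how one might parse and sanity-check such a response (the field names and values are taken verbatim from the response itself; the parsing code is an illustration, not part of the original pipeline):

```python
import json
from collections import Counter

# The raw LLM response shown above, reproduced verbatim (JSON ignores the
# added whitespace).
raw = '''[
  {"id":"ytc_Ugzgpr7kr5veAlMVji94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBSJKx-0pj5I09tsB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"concern"},
  {"id":"ytc_Ugwsp1orJx-m_-CTADR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw5ylGoYLHZaMaqLzd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugycq_yXHaUUP6l59KF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz7ZPNE3I64WxcG3Gp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyH5AdqTp7v3vmW2FB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"concern"},
  {"id":"ytc_Ugxj32Jw90ZpPlwlinx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyR0zsjVhi4kruOvV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwlYW75e2OBoRpiOtp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

records = json.loads(raw)

# Every record must carry exactly the four coding dimensions plus the comment id.
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(set(r) == expected_keys for r in records)

# Tally one dimension across the batch, e.g. emotion.
emotions = Counter(r["emotion"] for r in records)
print(len(records), dict(emotions))
# → 10 records; approval appears 3 times, concern and fear twice each
```

Checks like these catch the common failure modes of LLM-coded batches (malformed JSON, missing fields, unexpected category labels) before the codes are joined back to the comments by id.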