Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are a lot of things wrong with this, but one thing I've been learning about recently is AI's propensity for hallucinating/lying/making up answers to things with complete confidence, and only backing down after specific pushback from the user. This could be disastrous if used for teaching. As if the current educational system wasn't in a bad enough state.
youtube 2026-03-30T15:5…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | liability
Emotion        | fear
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwxbjbwY-Y1S09F5RF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy564-Ejk34vxpLhbB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzskqYVofSqp5CLI954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzasV4jYRtEIN2II8B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxKMC9ugezNV9gbWwp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyxE8fWkyphqaTN_7x4AaABAg","responsibility":"user","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzzozLXfK1OuPiPrt94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwykM52DIwHyRsEKD54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyoqqHUMvLCQyKuuBp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxO9HZ_G2d9oY9hY_l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
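The raw response is a batch: one JSON object per comment, keyed by `id`. A minimal sketch of how such a batch could be parsed and indexed to recover the coding for a single comment, assuming the model output is valid JSON (the two-element array below is an illustrative subset of the batch above, not the full response):

```python
import json

# Illustrative subset of a raw LLM batch response: a JSON array of
# per-comment codes. The ids appear in the batch above; the truncated
# list here is only for demonstration.
raw = """[
  {"id":"ytc_UgzasV4jYRtEIN2II8B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwxbjbwY-Y1S09F5RF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]"""

# Index the batch by comment id so any single coded comment can be
# looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for one comment; its dimension values match the
# coding-result table for the comment shown on this page.
record = codes["ytc_UgzasV4jYRtEIN2II8B4AaABAg"]
print(record["responsibility"], record["policy"], record["emotion"])
# → ai_itself liability fear
```

In practice the lookup step is where mismatches surface: if the model drops or mangles an `id`, the `KeyError` from the dictionary lookup flags the comment whose coding was lost.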