Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On flattery: You can turn that off. There's personality settings in OpenAI's model, just set it to "robotic". Then if you prefer you can tell it to disagree with you whenever reasonable evidence would support the position. It becomes a lot less soothing for the ego, but a lot more useful.
Source: YouTube, "AI Moral Status", 2025-10-30T20:1… (♥ 1)
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwQqxQpotEDfXAihYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAYICGUNLYsG4iY4t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxI_dehdpyV8pC13XB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxIxMcI7nMFQNvy2Y14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy3g0ts866w4vJEXDh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyXDx6-Fp8g5M0EkH94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx2z7Y-c56R2EZoEDt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxjLpclMnOB4psR4914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxXA2z3Hs6VgESpiah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxTUBmnBVxL36pBFTt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
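The raw response above is a JSON array with one record per comment, keyed by comment `id`, with one value per coding dimension. A minimal sketch of how such a response could be parsed into a per-comment lookup (the `parse_codes` function and the `ALLOWED` set are illustrative assumptions, not part of the tool shown here; the snippet truncates the array to two records for brevity):

```python
import json

# Two records copied from the raw response above; field names match the
# coding dimensions in the result table (responsibility, reasoning,
# policy, emotion).
raw = '''[
  {"id":"ytc_UgwQqxQpotEDfXAihYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAYICGUNLYsG4iY4t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# Hypothetical set of expected coding dimensions.
ALLOWED = {"responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw LLM response into {comment_id: {dimension: value}}."""
    records = json.loads(text)
    coded = {}
    for rec in records:
        # Keep only the expected dimensions; a missing key raises KeyError,
        # which flags a malformed model response.
        coded[rec["id"]] = {k: rec[k] for k in ALLOWED}
    return coded

codes = parse_codes(raw)
print(codes["ytc_UgwQqxQpotEDfXAihYR4AaABAg"]["emotion"])  # indifference
```

This mirrors how the "Coding Result" table above could be derived from the raw response: the first record's values (none / unclear / none / indifference) are exactly the row shown for this comment.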