Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI's are trained to make shit up rather than say "i don't know". this means one of two things for the future of AI. One do not penalize it for saying "I don't know" or two use it to replace executive management. they will most likely do the latter.
youtube AI Responsibility 2025-10-25T02:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzBkyU9T5LGi9bjIfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyudu-jqBQr_ExeCPN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxg2En8dy3i47chFSp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwLGIN4BUeL7zecXwt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwLlPtUZduKZBfKAI14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxFN6YuTaZnKhocDG54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxWISOIKaIdJ5a58F94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzxz-CDIUujnKURggh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzXFZj9VGXsfczMb-J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAIPJjJGk9JcxeZuZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
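The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and validated before loading it into the coding table (the allowed-value sets below are inferred from the values visible in this export; the actual codebook may define more categories, and `parse_codes` is a hypothetical helper, not part of any specific tool):

```python
import json

# Allowed values per dimension, inferred from this export only (assumption:
# the real codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"developer", "user", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject records with
    missing or out-of-vocabulary dimension values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: invalid {dim!r}: {rec.get(dim)!r}")
    return records

# Example with one record from the response above:
raw = ('[{"id":"ytc_Ugxg2En8dy3i47chFSp4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"unclear","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes[0]["responsibility"])  # developer
```

Validating against a closed vocabulary catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently pollute downstream counts.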