Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The "cost of errors" is not a new problem, and folks have been studying this for a very long time with all sorts of technologies. For better or for worse, what we call AI now (i.e., LLMs and other DNNs) is sophisticated automation - the black box nature of it means DNNs are very unlikely to ever be 100% accurate. I've found it helpful to take this lens when working with AI.
youtube AI Responsibility 2025-09-30T21:3…
Coding Result
Responsibility: none
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwa2Kmr-yDoZ4RZwpB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzEVvTlbZDP31vEjiJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzdxTJcMlE44fYFh014AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzLMvWr6EkdXfh55lZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwKLIoTIZupR-digid4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx99WGi7UEpXHaeDwF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzBnJ2XE4geSVJqZtF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyTFNb0SUHmJMG60nN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwoDM_JGyYAuiK3KWx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwHUJT_8ejOURstH2R4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
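A raw response in this shape can be mapped back to individual comments by parsing the JSON array and indexing on the id field. The snippet below is a minimal sketch assuming the response is valid JSON with the four coding dimensions shown above; the sample id and the raw string itself are hypothetical stand-ins, not the actual export.

```python
import json

# Hypothetical raw LLM response, shaped like the coded output above.
# Each object carries the four coding dimensions plus the comment id.
raw = """[
  {"id": "ytc_example1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability",
   "emotion": "outrage"}
]"""

# Parse the array and build an id -> codes lookup so each comment's
# coding result can be retrieved directly.
codes = json.loads(raw)
by_id = {row["id"]: row for row in codes}

print(by_id["ytc_example2"]["emotion"])  # outrage
```

Indexing by id rather than by list position guards against the model reordering comments in its response.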