Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They need to put custom constraints/limits on these AI's for these specific tasks, that means some curated LLM database work as well. If a order or something breaks those limits, then a human should be alerted and intervene. The issue is with all these businesses just recklessly injecting AI into the workplace. Even after years of manual training Microsoft had issues with there update testing systems, and still does.
youtube AI Responsibility 2025-10-01T01:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx75MtChFud4JUKTqh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzkiTRXZA9rXwDypy94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyLOEnr1bE80epojnd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWkCmPErIn-Bi_60B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugztkihi9NDEDADSS2J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJRg6dex-1j9hlHrh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSZPVoL64GW-0wHNV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwkgnDdXgOV9oYhfr54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzGjcYT4Nd7IdOPzwR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx0E_Qo7o0MTA6sm494AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
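The raw response is a JSON array with one coding object per comment id, so a given comment's dimensions can be looked up by indexing the array on `id`. A minimal sketch, assuming the response has been captured as a string (the two-element `raw` sample here is illustrative, taken from the array above):

```python
import json

# Illustrative excerpt of a raw LLM response (first two codings only).
raw = '''[
  {"id": "ytc_Ugx75MtChFud4JUKTqh4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzkiTRXZA9rXwDypy94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]'''

# Parse the model output and index the codings by comment id.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for one comment and read off its dimensions.
coding = codings["ytc_Ugx75MtChFud4JUKTqh4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company outrage
```

Indexing by `id` also makes it easy to cross-check a coded comment's dimensions (as shown in the Coding Result table) against the exact model output.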