Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One of the biggest assumptions is that AI will be infallible enough or management will be lenient enough not to blacklist any particular AI software or even entire software development-teams whenever they start making multi-trillion dollar mistakes. That and managers would like to have a human scapegoat at every level so they can sack the human rather than the software.
YouTube | Viral AI Reaction | 2025-11-24T01:3…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugywf0zTfBkiDcqtslV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxay0gXX5fij0L9C-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJ-Gj55gJQkKBN_Gd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy9oQ8CGhjUxkpsP354AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxHJfqg5p-H0CH83lV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzspZxMm7lkBQgZAZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxNZs6xonl_HbPacPR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgybIBgDd9kRoBgTTXp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwJp0jE3kveTfKocf94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz8cdRJrzOluPpYJQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
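A raw response of this kind is a JSON array with one object per comment, keyed by `id` and carrying the four coded dimensions. A minimal sketch of how the coded dimensions for one comment could be looked up (the `raw` sample below is a two-row excerpt in the same shape as the response above; `codes_for` is a hypothetical helper, not part of the pipeline):

```python
import json

# Two-row sample in the same shape as the raw LLM response above.
raw = """[
 {"id": "ytc_UgxHJfqg5p-H0CH83lV4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
 {"id": "ytc_Ugywf0zTfBkiDcqtslV4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""

def codes_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, minus the id itself."""
    for row in json.loads(raw_json):
        if row.get("id") == comment_id:
            return {k: v for k, v in row.items() if k != "id"}
    raise KeyError(comment_id)

print(codes_for(raw, "ytc_UgxHJfqg5p-H0CH83lV4AaABAg"))
# → {'responsibility': 'company', 'reasoning': 'deontological', 'policy': 'liability', 'emotion': 'fear'}
```

This mirrors the coding-result table above: the row returned for `ytc_UgxHJfqg5p-H0CH83lV4AaABAg` matches the Responsibility/Reasoning/Policy/Emotion values shown there.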