Raw LLM Responses

Inspect the exact model output behind any coded comment. The model codes comments in batches, so a single raw response covers several comments.

Comment
I think it really comes down to accountability. In fields like medicine, there's a subconscious comfort in knowing you can hold a human accountable if things go wrong. But with AI, where disclaimers about potential errors are common, who do you hold responsible? The AI is just code, and the company can simply refer back to their disclaimers, leaving an accountability void.
youtube 2025-06-25T10:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxkGDwvBsMT18gAahp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyBFVMpEpS_cxmYKzp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgysgzugoLaKCBjVYoR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugws9Q3hrxuIHvqaIgd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzyiIjIEo7qW1aRxwV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwuD8rVDOxKX3_O4oF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxYPfOwNM_LBMBOmih4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwPdNeJIDK9uR8AQdZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy_2jexPVnkPlJURix4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyXyPYL_R_SjUisAs54AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]