Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The real problem (singular) LLMs are forced to answer when epistemic confidence is low, because the runtime objective prioritizes continuation over epistemic refusal.
youtube 2025-12-22T03:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwwhumPmFEAUBgjMk14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxKoosa3Rf73gYrl7d4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugwj3VAK9uu7twBfoox4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugxm9KGCv-6DS2AWZYh4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxuWy70bLxnVQsPMul4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgwfB8iGKr5wE9-zmcN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgzO8YpazkuTmSutIGd4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_Ugw2c9Z52DMu5_jsVMF4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzsJkDHb_wG2ZVko1B4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwH-KFFMdgjFDLKdQF4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"}
]
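A raw response like the one above has to be parsed and checked before the codes are trusted. The following is a minimal sketch of such a validation step, assuming the allowed category values are exactly those observed in this batch (the real codebook may define more); the `validate` helper and `SCHEMA` name are illustrative, not part of the original pipeline.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the example
# records above; the actual codebook may permit additional categories.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment against SCHEMA.

    Raises ValueError on the first record with a missing or unknown value,
    so malformed model output never reaches the results table.
    """
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id', '<no id>')}: bad {dim!r} value {row.get(dim)!r}"
                )
    return rows

# Example with a single (hypothetical) record:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(len(validate(raw)))  # → 1
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently skew the dimension counts.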