Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
This alignment problem made me think we should give AI overarching rules, like Asimov’s robots. Let’s just hope one of them is not “to serve humans,” like in the Twilight Zone…
youtube AI Governance 2025-10-17T06:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzmexWnJbzB4UVydcp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugw7OLpNX_TZUxqq59p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwKEehWUnNlPWy_TWd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyU9nMB3UAMNASSNJJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyP5g2sFlAM953W1SJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx9H0IgRcbmLumw7BZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwMPclbDSD7WueoaUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyS6eg3Ahxh9j_h0xl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugza-ErPaJCR14qaidV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqpjoKAD_xVT18qRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
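The inspection above can be reproduced programmatically: parse the raw batch response, index the records by comment id, and check that the record for a given comment matches the values shown in the Coding Result panel. This is a minimal sketch assuming the JSON batch format shown above; it is truncated to two of the ten records for brevity.

```python
import json

# Raw LLM response in the batch format shown above
# (truncated to two records for brevity).
raw_response = '''[
  {"id":"ytc_UgwMPclbDSD7WueoaUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzmexWnJbzB4UVydcp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]'''

# Index the coded records by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw_response)}

# Look up the coding for one comment and verify it against the
# dimension values displayed in the Coding Result panel.
coded = records["ytc_UgwMPclbDSD7WueoaUd4AaABAg"]
assert coded["responsibility"] == "developer"
assert coded["reasoning"] == "deontological"
assert coded["policy"] == "regulate"
assert coded["emotion"] == "fear"
```

The id `ytc_UgwMPclbDSD7WueoaUd4AaABAg` is the record in the raw response whose values (developer / deontological / regulate / fear) match the Coding Result shown for the comment above.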