Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Don't give them unfettered access to critical systems. Place a human between the AI and the systems we want them to improve. If they are so intelligent that they're able to get around any safeguards we can imagine, you won't ever be able to ensure that they are aligned with our best interests.
youtube AI Governance 2025-12-04T19:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyiA9v4pgY3e4FumYZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyHpYeAINzYkW8mjdh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyeIif_inWmSeJOIGF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyfcOsZT7Cts-m5DP54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx30Hn3ZJJ_8LndRf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyaUstVfbcHNoTT7pl4AaABAg","responsibility":"elite","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwgnYZkp7AAjFa_cpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxgQOlkSzYjtADLAgF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwVyRI5sEUItR7zGQl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw9P0E5nuAu9-_anD54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
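To inspect the raw response for one coded comment, the JSON array above can be parsed and indexed by comment id. The sketch below is a minimal, hypothetical example (the variable names and the abbreviated two-record array are illustrative, not part of the pipeline); it assumes the model reliably returns a well-formed JSON array like the one shown.

```python
import json

# Excerpt of a raw model response: a JSON array of coded records.
# Ids and values are copied from the dump above; the full array has ten records.
raw = """[
  {"id": "ytc_UgyfcOsZT7Cts-m5DP54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyiA9v4pgY3e4FumYZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]"""

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index records by comment id

# Look up the coding result for the comment shown above.
coded = by_id["ytc_UgyfcOsZT7Cts-m5DP54AaABAg"]
print(coded["policy"], coded["emotion"])  # regulate fear
```

A lookup like this also makes it easy to spot mismatches between the rendered coding result and the raw model output for the same id.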