Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Isn’t the core issue that we train AI on the full spectrum of human behavior, even the parts we would prefer not to replicate? Shouldn’t we take a step back and curate what we expose these models to, rather than allowing them to absorb everything… both the good and the worst of us?
youtube AI Governance 2025-12-05T16:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxdXYJofwc2hQQIPFF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzwXhcMTI0e0cRW7iN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxQghPX53SVstuQZsd4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzsE06okQmKKUdyhql4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgztYfzCWNpv7lLAAZ54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyNx-lF9LIoC5j-nxd4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgyC7aJYZIU6gA1UWKd4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzta-CYkgF8m820Y614AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyJ8RslCZx-7UCpKoZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz6tAhKgja1KRitBYV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
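The raw response is a JSON array with one record per comment, each carrying an id plus the four coding dimensions. A minimal Python sketch for parsing and sanity-checking such a response follows; the allowed label sets below are inferred only from the values visible in this sample, and the real codebook may define additional labels.

```python
import json

# Allowed labels per dimension, inferred from the sample response above.
# Assumption: the actual codebook may permit values not seen in this batch.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"approval", "fear", "outrage", "mixed", "indifference"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# One record taken verbatim from the sample above.
sample = ('[{"id":"ytc_UgztYfzCWNpv7lLAAZ54AaABAg",'
          '"responsibility":"developer","reasoning":"deontological",'
          '"policy":"regulate","emotion":"mixed"}]')
codes = validate_codes(sample)
print(codes[0]["policy"])  # regulate
```

A check like this is useful because LLM coders occasionally emit labels outside the codebook; failing loudly on an unknown value is safer than silently storing it.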