Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Cool debate. I would have liked Max Tegmark to be on the panel. I get a little annoyed at people who anthropomorphise "intent" in AI. It doesnt cheat or deceive, it is merely attempting to come up with the most efficient solution, especially when there are no boundaries or limits. To avoid the shortcuts you need to give it parameters to operate within.
youtube AI Governance 2026-03-22T08:3… ♥ 2
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwkED1FLGvlc2IMmVt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyxMfxAK-Fmp-n2P-N4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwKRl54xheus_XHSw14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugzpc_6HmyVaXvQnJgt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxZHkHpvftIabygleB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzlCvoNl4OkokzYqDt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwM1A698rswL4Wp02d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyi3axQQ-0vFnR0bVR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxYRlbyB7iJIRA59Yt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy1HUU8J11XyI2jiFN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]