Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing that seems to be missing is the fact that all examples are based on training AI on human-sourced data, so it's akin to a multiplication of human intelligence. The natural assumption would be that there is a limit to that, since even artificially generated data is still ultimately based on human sources. Of course it may advance so quickly that it could destroy us all, but maybe we can instill this fundamental truth into the future AI.
youtube AI Governance 2025-09-04T23:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw8hHHgm7F45XvFlAR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxFJ8tWE23BWYFmQoh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwkgKfoizJuLb2S_w94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwwjmSvQhIW3Ihwz-54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxnuo-ChH7vMM7jeHx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwL8Jt9cCdiHEweLOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwsQrCUlxwtlqugnHZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzAU0hdrlrGduGKum54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxZ2Zx4aoQuUYl5YtF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgykASKgZey52VhxW0d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"mixed"}
]
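The raw response above is a JSON array with one coding object per comment, keyed by the comment's id. A minimal Python sketch of how such a response could be parsed back into the per-comment coding table shown above; the `coding_for` helper and the two-record sample string are illustrative, not part of the original tooling:

```python
import json

# Abbreviated sample of a raw LLM response: a JSON array with one
# coding object per comment (same shape as the response above).
raw = '''[
  {"id": "ytc_Ugw8hHHgm7F45XvFlAR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxFJ8tWE23BWYFmQoh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse the model output and return the coding object for one comment."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

row = coding_for(raw, "ytc_Ugw8hHHgm7F45XvFlAR4AaABAg")
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {row[dim]}")
```

In practice model output is not guaranteed to be strict JSON (stray prose, trailing commas), so a production pipeline would typically wrap `json.loads` in error handling and validate that every expected id and dimension is present before accepting the coding.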