Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I appreciate your analysis Dave but these existential threats are very hypothetical. Only when we see real evidence that AI has the desire AND ability to cause real harm should we react without haste. Until that point, the economic threat of foreign companies is a greater concern. Some supervision is fine, but significantly increasing regulation won’t be in our benefit until evidence suggests otherwise. Not saying anything you said was wrong, that’s just my take
Source: YouTube · AI Governance · 2025-10-09T06:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           industry_self
Emotion          resignation
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyZDPM4pKyxsj7kC154AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyIySXxAB9h1Eeuqmt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxrjq1IhvJiZisu3G54AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyHrfN5A_GYFKpm6u54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgwCBss1IZzKmiFCcSR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
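A response like the one above can be turned into a lookup from comment ID to coding with a small amount of parsing and validation. The sketch below is one possible approach, not the tool's actual implementation: the field names come from the JSON shown, but the `parse_codings` helper and its drop-incomplete-rows policy are assumptions for illustration (the raw string is truncated to two of the five entries for brevity).

```python
import json

# Two entries from the raw LLM response shown above (illustrative subset).
raw = '''[
  {"id": "ytc_UgyHrfN5A_GYFKpm6u54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "resignation"},
  {"id": "ytc_UgyZDPM4pKyxsj7kC154AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Fields every coded row must carry, per the response format above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_response: str) -> dict:
    """Parse a batch coding response into {comment_id: coding_dict}.

    Rows missing any required field are silently dropped; a real
    pipeline might instead log them for manual review.
    """
    rows = json.loads(raw_response)
    out = {}
    for row in rows:
        if REQUIRED_FIELDS <= row.keys():  # all required keys present
            out[row["id"]] = {k: row[k] for k in REQUIRED_FIELDS if k != "id"}
    return out

codings = parse_codings(raw)
print(codings["ytc_UgyHrfN5A_GYFKpm6u54AaABAg"]["emotion"])  # resignation
```

Keying by comment ID makes it cheap to join a raw batch response back to the individual comment views, which is how the single "Coding Result" table above can be reconstructed for one comment out of a five-comment batch.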