Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When he talks about two main risks (people misusing AI, and AI on its own becoming malevolent), he misses a third. The third risk is that people will become too dependent on AI and won’t learn to think or do for themselves. The third risk is we will devolve as a race into something that can’t survive without AI.
youtube AI Governance 2025-06-16T11:2…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyrYsW595L2-_DewEp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy3pSftKWBS-32uk8t4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx7-GrZRJnqy1URviF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyTp6iSIWtvOHk7rex4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgykLi8xICfWHVZK7q54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgyUAy6tqRGL1IW_YD14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxkBfq2w3k8ikofV0J4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzcJ5SCXtQChZFoZhZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx6297PxIITLI0LC914AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwl7_AoIffXxSKy9Oh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
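To inspect the model output for a specific coded comment, the raw batch response can be parsed and indexed by comment id. The sketch below is a minimal illustration using Python's standard `json` module; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response above, while the function name `index_by_id` and the truncated two-record sample are hypothetical.

```python
import json

# A truncated sample of the raw batch response above (two of the ten records).
raw_response = """
[
  {"id": "ytc_UgyrYsW595L2-_DewEp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx7-GrZRJnqy1URviF4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

def index_by_id(response_text):
    """Parse a raw batch response and index the coded records by comment id."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

# Look up the coding for one comment, matching the Dimension/Value table above.
codes = index_by_id(raw_response)
rec = codes["ytc_Ugx7-GrZRJnqy1URviF4AaABAg"]
print(rec["responsibility"], rec["policy"], rec["emotion"])  # user liability fear
```

Indexing by id (rather than scanning the list each time) makes it cheap to cross-check any comment's table entry against the raw model output.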