Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why does it feel like these predictions always focus on one variable? The human society has its own dynamics, humans will respond to progress. There always practical issues and feasibility issues. AI might be very good, but at what cost? Is there enough energy to have 8 billion agents doing the exact same work as humans? This cataclystic predictions do more harm than they contribute to discussing real issues
youtube AI Governance 2025-09-04T22:1…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzJftYj-AQwVx6dxzl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzYJtEbQwdz-dR4X914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwshZsHQi_33KpIYy14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyD3-5P3CugzLlSIXJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWvVS5mvdawTVqw2t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy2TO1b1krcfhaKzHh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxn9RPptYRzqvXBd6B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyBfdMYwpN0gWi8z0t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxHJVnjYBZmrvnldjJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzH0Wt5Q2vRdFJ19Bh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
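The raw response above is a JSON array of per-comment codings keyed by comment id. A minimal sketch of how such a response could be parsed and validated follows; the allowed value sets are assumptions inferred from the codes visible on this page, not a definitive schema, and `parse_codings` is a hypothetical helper name.

```python
import json

# Assumed code sets per dimension, inferred from the values shown above.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping invalid rows."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # skip records with no comment id
        coding = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        # Keep only rows whose values fall inside the assumed code sets.
        if all(coding[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[cid] = coding
    return out

# One record from the response above, used as a usage example.
raw = ('[{"id":"ytc_Ugy2TO1b1krcfhaKzHh4AaABAg","responsibility":"unclear",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
codings = parse_codings(raw)
```

Looking up `codings["ytc_Ugy2TO1b1krcfhaKzHh4AaABAg"]` then recovers the coding shown in the result table above (responsibility unclear, consequentialist reasoning, regulate, outrage).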