Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The reason AI doom won’t happen is the profit motive. AI doom scenarios ignore the most fundamental driver of technological progress: the profit motive. Whenever an AI model underperforms or fails to deliver value, its creators have every incentive to retrain, improve, or replace it. The market rewards better performance, not destruction. As a result, AI systems continually evolve to serve human goals more effectively. They are tools, not autonomous threats. Even a hypothetical AGI would operate within the same economic and feedback constraints as other AI systems—its development and maintenance would depend on human goals, data, and market incentives. If you disagree, feel free to define AGI and explain why you think it would escape those forces. Humans evolved independence through biological and reproductive pressures. Tools, by contrast, evolve through design and human incentive pressures. AI/AGI, being a tool, is subject to incentives that stem from human interests, not self-preservation.
youtube AI Governance 2025-10-28T03:0…
Coding Result
Responsibility: company
Reasoning: consequentialist
Policy: industry_self
Emotion: approval
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwShpY7vnGJ6FN3abF4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgwqdDQQ_vI7ZNBzjMJ4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgwUY_lRVS5ZZAkYLON4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugz3VBI68jSEH5KgFiV4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxsFdElBL8I682Mas14AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_Ugy0vow4XnM68m6Nhf14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwQ6h1o4TcPYW_iicB4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_Ugwn9FK3peHHQyYzLr94AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugx2rKiKJp9axraLbdZ4AaABAg", "responsibility": "user",        "reasoning": "contractualist",   "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugz7gI_yy04N4gtao614AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"}
]
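The raw response is a plain JSON array, so it can be parsed with standard tooling and indexed by comment id to join codings back to their comments. A minimal sketch, assuming the bracketed array format shown above (only two of the ten entries are reproduced here for brevity):

```python
import json

# Raw LLM response: a JSON array with one coding object per comment,
# covering four dimensions (responsibility, reasoning, policy, emotion).
raw_response = """[
  {"id": "ytc_Ugz3VBI68jSEH5KgFiV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugz7gI_yy04N4gtao614AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

codings = json.loads(raw_response)

# Index the codings by comment id so each one can be looked up
# when displaying a coding result next to its source comment.
by_id = {c["id"]: c for c in codings}

coding = by_id["ytc_Ugz3VBI68jSEH5KgFiV4AaABAg"]
print(coding["policy"])   # industry_self
print(coding["emotion"])  # approval
```

The same lookup pattern reproduces the "Coding Result" view above: given a comment id, each of the four dimension values is read directly from the matching object in the array.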