Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a huge flaw in the logic of this video though. It's comparing the possible coming AI spike with trends that took decades to be where they are today (existing automation and outsourcing), and looking at trends in investments making people more money over the past decade, and assuming that it will be a similar situation with AI automating everything. 30%-40% of the existing job market could potentially disappear within the next 18 months, literally nothing we have in place would mitigate that impact, and in the current geopolitical climate it's very unlikely that we will get there either here in the US or elsewhere. If money itself becomes worthless, which is likely in that scenario, then no amount of investing will help. Now, this might not happen. We could hit a roadblock in AI development from resources (chip shortages or energy production limitations, for example) or coding itself (LLMs winding up being an actual dead end), but barring that we are in for an unprecedented paradigm shift that none of these predictions will be able to map.
youtube AI Harm Incident 2024-07-29T00:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugxm7ENojjvkF12DvLB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyZMhLpDCn4D5jSUZt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxHPTbCD7wyErFM7JN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwUKSeAiYp0lRJ5jqh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKlPjNG7aKu82glWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzDtVKFLRUw7SZ4DbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx2rbKNDHKC-hUyz_t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy7_X9JdCJW5fuRos94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgytNdnkuR-ZnmMgq9Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxU9dPXuKcGOp-LgB14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
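The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of loading such a response and looking up one comment's codes (the ID and field names are taken from the response above; the variable names are illustrative):

```python
import json

# A small excerpt of the raw model output, as a string.
raw = '''[
  {"id": "ytc_Ugxm7ENojjvkF12DvLB4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxHPTbCD7wyErFM7JN4AaABAg",
   "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]'''

# Parse the array and index the codings by comment ID for fast lookup.
codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}

# Retrieve the coded dimensions for a single comment.
coding = by_id["ytc_UgxHPTbCD7wyErFM7JN4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # unclear fear
```

In practice the full array would be read from the tool's stored response rather than a string literal, but the lookup pattern is the same.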