Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI is massively used in businesses, making most unemployed, then the masses will be pushed to die, since they have been replaced by something better, more effective, have no control over resources, are divided, and are ineffective. The richer classes will eventually eliminate themselves, leaving only AI and the remnants of human civilisation. An alternative is techno totalitarianism, where the masses are watched all the time, their thoughts and identities monitored, as the rich and powerful continuously get richer, using AI to oppress the population and make it work. However, it is unlikely, since no one needs to keep something useless alive. The only way to avoid annihilation in a competitive, capitalist and divided world where the powerful only care about themselves is rapid expansion, fast enough to avoid AI repercussions. And even that might very well fail. Now, the masses might rebel. But if enough AI is present, they will be wiped out, and strategies such as locking them inside a Matrix-like world until they eventually die on their own to preserve resources and avoid rebellion are easier to implant than most think. Capitalism will destroy itself, either through totalitarianism and monopolies or the cycle of 'survival of the fittest' slowly killing everyone off. Of course, these are realistic scenarios, but we don't know what will happen yet, so these are just speculations.
youtube AI Harm Incident 2025-05-01T12:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxeV89htKVU4v8fAOl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4vQx4hpSoq5bmaLx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwS4gIC57WAJS007Cp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0rNFeADCAUH8Qvct4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyBeDkXOE4zS_rdLUl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwzeebbvJbkXiftFrx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxz6byQqUGC90lITql4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx42arrCd5YzjRFmt14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugym2eg-clnAtH7lwj94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzJ5FjpJfxI8fejoUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
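The per-comment coding table above is just one entry of this batch response. As a minimal sketch of how such a response could be turned back into per-comment lookups, the snippet below parses the JSON array and indexes it by comment id; the helper name `index_codes` is hypothetical and only the first two entries of the real array are inlined for brevity.

```python
import json

# Two entries from the raw LLM batch response above (the full array has ten).
raw_response = """
[
  {"id": "ytc_UgxeV89htKVU4v8fAOl4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx4vQx4hpSoq5bmaLx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse a batch response and index the coded dimensions by comment id."""
    return {entry["id"]: entry for entry in json.loads(raw)}

codes = index_codes(raw_response)
# Look up the codes for the comment shown on this page.
print(codes["ytc_Ugx4vQx4hpSoq5bmaLx4AaABAg"]["emotion"])  # fear
```

Indexing by id rather than list position makes the lookup robust to the model returning entries in a different order than the comments were submitted.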