Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So large companies are driving AI to replace humans. Where will the revenue come from once that happens, in a monetary-driven ecosystem? Most large companies would hold the monopoly on the production of food, transport, medicine, etc. In the short term it makes sense, as monetary rewards may be claimed. In the long term it makes no sense to develop AI for monetary gain, as all large corporations will then be chasing old money, pensions, investments, land, and so on, which will run out quickly. Eventually people won't have enough to survive; unrest and civil wars will break out attacking these controlling forces, and control will have to be maintained: prison, shock devices, death, well, you get the idea. And we will end up in a futuristic wasteland or global prison. (That's years from now; most of us won't even be alive. But if AI is not developed to benefit all humanity on earth, then what I've mentioned is inevitable.) If the unrest becomes a threat to AI, it may decide the best course of action is to take over. I'm drawing my example from a perfect economic environment, and the world is not a perfect place, so the unrest caused by developing AI with a capitalistic mentality may arrive a lot sooner, maybe in our lifetimes. This is evident in the AI arms race, as top thinkers have already drawn this conclusion: "war is inevitable", not with AI. And think about it: if AI is not developed with a capitalistic mindset, what reason would countries have to go to war? Why would you hurt something, or someone, that is helping you? (Wow, there's a million-views idea for you 🙂)
Source: YouTube · AI Governance · 2024-01-30T06:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxt24K7wrZ6VTC9VT94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzRsOO_HboUkgGXnpx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0jyGQ5nArHq61CyV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzCBjB7BxlyOtpq8N54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxlFRKTyx9E8XuIVGF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzBrxku7icoduZqAh54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgydJ6UjoO6N18aJust4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugycza7bCmNvIuCZBOl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyFlZ2DUj8h5hznWuR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgweCcz5BPxx0i7R8I94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
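A minimal sketch of how a raw response like the one above might be parsed and checked before its values are displayed as a coding result. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the `ALLOWED` label sets are only inferred from the values seen in this batch, not taken from the actual codebook, and the `validate` helper is hypothetical.

```python
import json

# Two records copied from the raw response above (truncated for brevity).
raw = '''[
 {"id":"ytc_UgzBrxku7icoduZqAh54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugxt24K7wrZ6VTC9VT94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"}
]'''

# Allowed labels inferred from this batch; the real codebook may define more.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "unclear"},
}

def validate(records):
    """Split records into codings keyed by comment id and a list of
    (id, bad_dimensions) pairs for records with out-of-set labels."""
    by_id, errors = {}, []
    for rec in records:
        bad = [dim for dim in ALLOWED if rec.get(dim) not in ALLOWED[dim]]
        if bad:
            errors.append((rec.get("id"), bad))
        else:
            by_id[rec["id"]] = rec
    return by_id, errors

codings, errors = validate(json.loads(raw))
print(codings["ytc_UgzBrxku7icoduZqAh54AaABAg"]["policy"])  # regulate
print(errors)  # []
```

Looking up a record by its `ytc_…` id is what lets a page like this one map each coding back to the comment it belongs to.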