Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The people making AI know these dangers, and they literally don't care. It's like most big business magnates and corporations that pollute, or use slave labor, or whatever other bullmess to make their billions... the a-holes are running the show in this world top to bottom, and the other 75% of humanity is just letting them for some odd reason... Aside from all the realistic dangers foretold and forewarned about "the future," which the world has been walking into foolishly left and right, AI really is one that was present more often than other subjects when you really think about it. I remember over 20 years ago trying to explain the danger of AI to people, and in the last 15+ years just seeing people not even take the advice of more knowledgeable folks than myself. But I'd explain that the amount of processing power, memory (as in virtual RAM) and data space (as in hard drives) allotted to AI should always be set at hard caps/limits to lessen the danger to humanity. This would not only prevent the immediate capabilities of AI from becoming too strong, but create a limiting set of factors that would ensure AI does not quietly evolve on its own. Those limits would be 100% necessary, on top of making and keeping AI in isolated environments already cut off from the web, as well as any wireless access to other devices, which some AI has apparently already tried to do on its own. On top of hard limits, isolation, and a well-controlled and monitorable set of capabilities, a kill switch should always have been made. BUT NOPE! Runaway tech research, just as forewarned and foretold by futurists, will more than likely screw over most of the world.
youtube AI Governance 2025-12-13T14:3…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxdvm_ji62KuXX9Kcl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgztRgdjtpuxO8gIIup4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzIkHNhE_wcnY96Ntl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwtK4YZiJzXWLMawNF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzc1N0LTqIfyoflUSF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx540xP3BtNgkoU8ot4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyJwzgxJpdsbZ9TqyR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxUzTteBk-AsblDcBB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqOhdy7WnRXkd3Tup4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzofneNXxK35Ai7nyx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
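The raw response is a JSON array in which each object maps a comment id to the four coded dimensions. A minimal sketch of how such a payload can be parsed and the coding for one comment looked up, assuming the model returned valid JSON (the function name `coding_for` is illustrative, not part of any tool shown here; the payload below is truncated to two of the ten rows):

```python
import json

# Two rows copied verbatim from the raw LLM response above
# (the full payload contains ten such objects).
raw = '''[
  {"id":"ytc_UgzIkHNhE_wcnY96Ntl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxUzTteBk-AsblDcBB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(payload: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coded dimensions
    for one comment id; raises KeyError if the id is absent."""
    rows = {row["id"]: row for row in json.loads(payload)}
    return {dim: rows[comment_id][dim] for dim in DIMENSIONS}

print(coding_for(raw, "ytc_UgzIkHNhE_wcnY96Ntl4AaABAg"))
# {'responsibility': 'company', 'reasoning': 'virtue', 'policy': 'regulate', 'emotion': 'outrage'}
```

Indexing by `id` first makes the lookup independent of the order in which the model emitted the rows, which matters because nothing in the response format guarantees row order.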