Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All we need to do is listen to the words of the inner circle of the individuals that are developing and funding the current stage of this technology. If that does not raise red flags, I don't know what will. After that sinks in...look at who they are selling it to, public and private. I have watched the trend of a society that has empowered and rewarded many people within our corporate and institutional structures who prosper in an ecosystem that functions within some level of psychopathy, or sociopathic tendencies, a culture not tolerated as a norm, some decades back. If that is the model for AI development and creation, we are in deep, dark waters. The rapidity of technology and materialism produced during the very contentious 20th century, especially post WWII, has yet to be understood, regarding its overall impact on humanity, yet we continue to push forward at breakneck speed, with little regard to so many warnings. The US being one of the points of this great technological and material renaissance, today our claim to fame is having the most 'billionaires' yet we remain unable to resolve any of the extinction-level issues we currently face, but seem confident in allowing the wealthiest to dictate the future and course of AI. Of course we are unable to stop tech development, nor should we, but we sure as hell had better develop a much deeper understanding and perspective of its use. As I recall, AI has been discussed for about 75 years and the debate continues. I would listen intently to what the fathers of its final stages of development have been saying; they are speaking more from a point of wisdom at this stage, as most all of the hypothetical is long gone.
Source: YouTube — AI Harm Incident, 2025-07-27T02:1… (♥ 2)
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugxs8YXNKW7STgNEVOl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzOugmF6FwknN19yzl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzHJ7jlJsyDeuOl-8B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyE7Lv9EjqUXusVb5J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyRvrWGDSqf-JyAOXZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxXyvfqff3D8Cy6oL54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgymIxPvlHxxGR5s8PB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz_seaS_WZJ6FNT9U54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy26dblunshy6EOzeN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxy3WeqtzQNkL8ypRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
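The coding result shown above is recovered by matching the comment's ID against the records in the raw response. As a minimal sketch, `codes_for` below is a hypothetical helper (not part of any documented tool API), and the embedded string reproduces just two records from the response for a self-contained example:

```python
import json

# Two records reproduced verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgyE7Lv9EjqUXusVb5J4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxs8YXNKW7STgNEVOl4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]"""

def codes_for(comment_id, response_text):
    """Return the coding record for one comment ID, or None if absent."""
    for record in json.loads(response_text):
        if record.get("id") == comment_id:
            return record
    return None

# Looking up the comment coded in the table above yields its four dimensions.
codes = codes_for("ytc_UgyE7Lv9EjqUXusVb5J4AaABAg", raw_response)
print(codes["policy"])   # liability
print(codes["emotion"])  # fear
```

Note that a real pipeline would also want to handle malformed model output (e.g. wrap `json.loads` in a try/except), since raw LLM responses are not guaranteed to be valid JSON.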