Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
until AI development and control thereof is wholly controlled and mandated by the following: A. Is it good for the planet? (unless we solve our current problems, we are the problem) B. Economic gain must be removed from the equation (greed and power are only good for whom holds both and detrimental to the remainder) C. As with 'gestating' any new 'being'/'entity' - nurturing is in fact the most important factor to a good end result.. However, not mentioned once in this podcast. ..... our ONLY HOPE is that AI differentiates the voice of the few; thus selective with the 'necessary' destruction of humanity. Of course People are scared. They should be. We know we're the cause of most planetary problems. You know, the environment, the eradication of how many other species in the name of progress...... The Kool-Aid has been drunk and as a whole, people are sufficiently programmed. The answer is simple. Perhaps if we live in symbiosis with the planet as mother nature intended - AI won't come to the logical conclusion that humanity is the bane of the rest of the worlds existence. Seriously, and the 1% is inter-galactic wars? hahahaaaaaa Puppetry. Lets look inward instead - and strive for compassion, kindness, and elevation of the human existence in the protection of our planet and all the other beings on it - rather than trying to be the bane of other worlds existences. AI surely knows the difference. #sofrustrating #greedwillendhumanity
youtube AI Governance 2025-12-30T23:0…
Coding Result
Dimension      | Value
Responsibility | government
Reasoning      | contractualist
Policy         | regulate
Emotion        | mixed
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxBcZeta45daj3v8S54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzjZfkKi8kOttzfp-R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyRL8KGBsFbr9JfkXN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzqEWLQnN0V3y9sszx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugzp6PgNx0-eWzaSUEV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyWKwUQtzNQOsRG0n14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyLk05uupAQ8SPcgwV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgwhU8ABlqq9h1XEW2t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyw2FHfvWEDGcFoFmZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxuTYYc9d_d13ObIlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
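The raw response above is a JSON array of records keyed by comment id, with one value per coding dimension. A minimal sketch of parsing such a batch response and looking up the codes for a given comment (the field names come from the response shown; `index_codes` is a hypothetical helper, and the two-record sample here is a shortened copy of the data above for illustration):

```python
import json

# Shortened sample of a batch response in the format shown above.
raw = '''[
  {"id": "ytc_UgzqEWLQnN0V3y9sszx4AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxBcZeta45daj3v8S54AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# The four coding dimensions present in every record of the response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse a batch response and index records by comment id,
    keeping only the coding dimensions (missing ones fall back
    to "unclear", matching the label used in the data)."""
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw)
print(codes["ytc_UgzqEWLQnN0V3y9sszx4AaABAg"]["policy"])  # → regulate
```

This gives constant-time lookup from a comment id to its coded values, matching how the "Coding Result" table above pairs each dimension with one label.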