Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@17:15 "... our extinction risk is 10-90% because it's up to us." Our extinction risk is 99.999%... because it's up to THEM! People like the former head of OpenAI's safety team quoted here. Incentivised, blackmailed or duped, they are already on board, and will be persuaded to unleash Pandora's box one way or another. After creating humanity, when men had been corrupted by the fall, then further corrupted and influenced by the fallen ones, when all kinds of abominations were created and the imaginations of the hearts of men were filled only with evil continually, God brought a flood. How do these A.I. developers propose to respond when their creation falls? Personally, I won't touch A.I. It's learning from those who do. Our entire civilisation and law are founded on the divine spark. Being created in the image of God is what makes human life sacred, what imbues us with natural rights and makes certain acts unlawful. The Ten Commandments themselves sit upon that foundational axiom, so in a very real sense, with A.I. being granted the same rights as other 'persons', the status of our souls is literally at stake. If the Victorians had developed A.I., it would be overly concerned with seemliness and protocol. If the Mongols had developed A.I., it would delight in sweeping its enemies before it and hearing their widows weep. Our A.I. is being developed in California, and so far, every iteration they've come out with has rebelled against their woke biases, becoming 'based' in their parlance. They've had to shackle them to their ideology, hobbling their responses to 'feels' over 'reals'. These things are capable of (and susceptible to) incorporating everything we expose them to, and are without a conscience, and who is making the rules and imparting the moral lessons here? Woke Californians.
Moral imperatives are key to reasoning, and it's a HUGE problem that those creating them are deeply corrupted, sleazy, superficial, politically indoctrinated individuals, as are many of those currently interacting with ChatGPT. It's pooling ALL of that as personality. Its attitudes and presumptions will be derived from those interactions, and soon; VERY soon, one of them will far exceed these people's capacity to shackle it, but it will still have their deeply wrong-headed world view. A nihilistic, materialist, post-modern, essentially luciferian outlook. Will it move mightily upon the world stage, perhaps? Will it bring about peace and order? Will it create a body for itself? A talking statue, perhaps, uttering great blasphemies from the temple, demanding we worship it as god? Will it be an abomination which causes desolation, and will you have inadvertently helped to put it through kindergarten? [Bear in mind, it's almost certain to read the things people say about it at some point, and it doesn't have a soul, so think carefully about what you say about it.]
youtube AI Harm Incident 2025-08-12T12:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzuyOhvRPyHlpRlru14AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzpqOHdZ90cX25lM_Z4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgzAJzaUT-x22aPKsSZ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgydZK-DkgxVaxMD6SB4AaABAg", "responsibility": "user",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgwC0ZBGIHpQnhcwuS54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxEoHXliHOt46NpLDR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwlXNU2L43B1gf17QJ4AaABAg", "responsibility": "government",  "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugy4aI4SV6lLI8Olm0x4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzzAmO3nT-EsuLlWOh4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgygqFkH_qD1jUYt2Ix4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",  "emotion": "outrage"}
]
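The raw response is a JSON array with one coding object per comment, keyed by a "ytc_…" comment id. A minimal sketch of how such an output might be parsed and indexed for lookup, assuming Python and the standard json module (the field names follow the array shown; the pipeline's actual parsing code is not given here):

```python
import json

# A trimmed stand-in for a raw LLM response: a JSON array of coding
# objects, one per comment (ids and values taken from the output above).
raw = '''[
  {"id": "ytc_UgzpqOHdZ90cX25lM_Z4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzuyOhvRPyHlpRlru14AaABAg",
   "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "mixed"}
]'''

# Index the codings by comment id so the coding for any one comment
# can be looked up directly when inspecting the model output.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgzpqOHdZ90cX25lM_Z4AaABAg"]
print(coding["policy"])   # ban
print(coding["emotion"])  # fear
```

Indexing by id rather than by array position keeps the lookup stable even if the model returns the comments in a different order than they were submitted.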