Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No one apparently pays attention to the warnings of Science Fiction. If they did, 2025 wouldn't be what it is. On top of everything, some companies/agencies seem to be embracing the dystopian ideas many films and TV shows have warned us about. Two in particular are Japan and China. China recently launched a couple of satellites (can't remember if late last year, or early this year). The names in Chinese, translated to English, are Skynet 1 and Skynet 2. Japan is upping the ante, with an actual company (university project) working on robotics and AI. The company name.... Cyberdyne!!!!! Their about us includes this... "CYBERDYNE was established in June 2004 as a university venture to solve various social problems facing the super-aged society. In order to solve various social issues in the increasingly serious situation with a declining birthrate and an aging population, we are focusing on business promotion to create the ideal future by researching, developing, manufacturing, and shipping innovative Cybernics systems, focusing on medicine, welfare, life, and the workplace, utilizing a new field "Cybernics" that combines humans, AI-robots, and information systems." Sounds good, until you get to the last sentence, right? Now, sadly, there are robots that can do parkour. Not even the T-1000 was capable of that. Ukraine has deployed weaponized autonomous killer robots to the battlefield, and other militaries are following suit. Remember, even Boston Dynamics started with "Big Dog" for the military. It was a failure at the time, for many reasons, but it eventually evolved into Spot. Given the current political climate, globally, this is going to be a significant threat, long before we actually see it. I want off this rock so bad, I hope aliens are real. The moment any AI attains "consciousness", we are cooked. Remember the Matrix? "Humans are a virus!".
youtube AI Moral Status 2025-10-31T18:2…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPNrdDRZiPWpfWqHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjdYfnsDQuw2Edxfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzKEgf6P7pZRCRYCEd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxfc_dAuv16pJqt3Fx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8-3TVxfY7fty90_B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwAKvWCoXZdweSDSsx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
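A raw response like the one above has to be parsed and validated before it can populate a per-comment coding table. Below is a minimal sketch of such a step, assuming the allowed category sets for each dimension are those observed in this sample output (they are inferred here, not taken from an official codebook, and the function name `parse_codings` is hypothetical):

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw JSON array of codings and index them by comment id.

    Raises ValueError if a record is missing a dimension or uses an
    unknown category, so malformed model output fails loudly instead
    of silently polluting the coded dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim!r}: {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with the first record from the response above:
sample = ('[{"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg",'
          '"responsibility":"developer","reasoning":"consequentialist",'
          '"policy":"unclear","emotion":"indifference"}]')
coded = parse_codings(sample)
print(coded["ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg"]["policy"])  # unclear
```

Validating against a closed vocabulary like this is what lets a single table row (e.g. the government/deontological/regulate/fear coding shown above) be trusted as machine-readable rather than free text.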