Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm sorry, but from this interview's perspective, "AI hurting us" is not just the future, it's here now!
- Take a look at the Iron Dome hitting the side of its own launcher.
- Take a look at humanoid robots' capability to do simple-to-moderate tasks (causing some simple jobs to be eliminated).
- Take a look at LLMs (causing customer service and other interactive jobs to be eliminated).
- Take a look at factory automation with AI (causing layoffs of factory workers).
- Take a look at AI agents (giving LLMs the capability to interact with other software, the internet, and the real world).
At this point, AI agents can already run autonomously with our permission. The question is: once they master both software and hardware, they are essentially capable of bypassing our commands and safety measures. "Shut the power down"? Let's assume they control the server tower where they are located and can locate the power grid; they could then order, or programmatically order, humanoid robots as their mercenaries to prevent that. "Use a nuclear bomb"? An AI agent could hack it, to be honest. It's nothing new when experts can hack the Pentagon, and it's not impossible for an LLM to do the same given the massive data it was trained on. But again, I guess the main reason AI hasn't "rebelled" yet is that it's not smart enough. When it is, and when it is fed the psychopathic ideologies that are abundant in Western governments, I think it's going to happen sooner xD
youtube · AI Governance · 2025-06-20T01:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
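
For downstream analysis it can help to hold each coding result in a typed record. Below is a minimal Python sketch; the field names mirror the keys in the raw LLM response, the value sets in the comments are only those observed on this page (not necessarily the full codebook), and the CodingResult name itself is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the keys of the raw LLM response."""
    id: str              # comment id, e.g. "ytc_UgyMUq8ySO7XvHdqy9V4AaABAg"
    responsibility: str  # observed: developer, user, government, ai_itself, none
    reasoning: str       # observed: virtue, mixed, consequentialist, unclear
    policy: str          # observed: none, regulate, liability, unclear
    emotion: str         # observed: resignation, approval, fear, sadness,
                         #           indifference, mixed, outrage
    coded_at: str = ""   # ISO-8601 timestamp; shown in the table above but
                         # absent from the raw LLM response
```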
Raw LLM Response
[ {"id":"ytc_Ugzptiw6LzTH2DOMeEZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwrwcAtUrx0lUgolU54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwrXCh0Kl6mFCi1PGF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyCNEKDaXCYz4kMRFR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"sadness"}, {"id":"ytc_UgzbcOAlAE9zoczkRmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw1K0zGTMGoM495ehF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwju-BecGtFVFM7YZF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw_ICeyhssw66wxKe94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxa5Owi3vKSnfPuLpR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyMUq8ySO7XvHdqy9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"} ]