Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Why do you think that Sam Altman and other AI leaders agree that the risk of extinction from AI should be a global priority?" It's pretty simple, really: they want to control the AI market and keep what will eventually be very powerful technology out of the hands of regular people. They're using fear mongering to push for AI regulation, which will erect barriers to entry in the AI market and build a moat around their existing companies. That's how this always works. Ask anyone that actually understands how AI technology works, and they'll tell you what nonsense this "AGI" fear mongering is. Large language models are actually very stupid, and not very good at what they do, but because they're good at making it seem like they're far more capable than they actually are, people's minds are blown when they see it work. But as anyone that has actually used ChatGPT knows, more often than not it gives you incorrect answers or is incapable of doing the task that you want it to do. Getting something to work properly is more often an exercise in learning and adapting to its limitations than it is inputting something and having it spit out what you want it to. And more importantly, it's also about careful curation of the AI training data. Every single one of these big companies is laboring on about how AI models need to be "ethical." Look at how blatantly biased ChatGPT is. This is about the control of information. When we get to the point where a smartphone largely becomes a pocket AI assistant, the user will not question the information given to them as fact, but in reality, the user will have no idea of the control of information being fed into it. This is by design, and a big reason for the push of all this fear mongering: they want to control what you know and think to be true. The reality is that the doomsday AI is kind of like Musk's "Full Self Driving" feature: decades away from even being possible, at best. 
AI is rapidly advancing, but the majority of what you hear about it in the media is absolute nonsense, being pushed onto you for the benefit of wealthy businessmen that want to control the entire industry. Thankfully, open source AI development is advancing faster than these companies can keep up (see the Google "no moat" memo). I highly recommend that anyone who wishes to have some kind of REAL understanding of where AI technology is, where it is headed, and what is going on with all this fear mongering nonsense to give the "AI Unchained" podcast by Guy Swann a listen. Guy is a very smart dude who understands the technology quite well and covers many aspects of it with guests who work in the space.
youtube · AI Governance · 2024-01-17T14:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugym0eARlUxFgZ3-JDh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxB574rBySRm4oTMQF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyLcbP8mZgDHqhW1FN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy6V99cAK_x4JsNCP14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxtvGJxWX1RDN-3qiB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyF_7XtI79e2ZkIfNZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzHusGj6aNDNR7vO0B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxeQl1HqxwJUOSuDDB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw52pVXKRkcw5incDB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxmXyJ1MDO9oJjD_254AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
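To inspect the exact model output for a coded comment, the raw response can be parsed and indexed by comment id. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are those shown in the response above; the variable names and the single-record string below are illustrative, not part of the pipeline.

```python
import json

# Illustrative excerpt of a raw LLM response (one record from the
# array above); in practice this string would be the full response.
raw_response = '[{"id":"ytc_Ugy6V99cAK_x4JsNCP14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]'

# Parse the JSON array of coded records.
records = json.loads(raw_response)

# Index records by comment id for direct lookup.
by_id = {rec["id"]: rec for rec in records}

# Look up the codes assigned to a specific comment.
codes = by_id["ytc_Ugy6V99cAK_x4JsNCP14AaABAg"]
print(codes["responsibility"])  # company
print(codes["policy"])          # regulate
```

The printed values match the Coding Result table above, which is how a coded row can be traced back to the raw model output.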