Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I like to see the positive as well, and I generally like Neil deGrasse Tyson, but his answer to the multiple threats of AGI amounted to shrugging and saying "they'll never be us" and "hopefully we'll put some guard rails up and keep the danger in check". Weak, vague, and wishful thinking; and this is the first time I've ever said that about Neil. Capitalistic markets have ravaged our ecosystem in multiple dimensions, and the boom/bust cycles of the market mirror the boom/bust cycles of nature when one species gets out of hand (e.g. the mice in Australia). There is no evidence to suggest that we are safe from over-extending ourselves until we experience massive collapse and suffering in general. We are racing ahead to create AGI, even though many of the foremost developers are shitting their pants telling us that there is no way for them to control what they are creating, and that it will be--at best--a matter of pure chance if we end up with AGIs that serve us the way we want to be served. This is because AGIs can learn anything, including learning how to learn faster and better, so they learn exponentially. We are already training them to analyze, mimic, and manipulate how humans think, strategize, and feel; and also how to sabotage foreign societies. Once AGI's exponential learning curve ramps up, we lose control, period. To think otherwise is to suggest that a species with duck-level intelligence could rule a species with human-level intelligence. It continues to baffle me how people who have seen AI break so many barriers, including the art/creativity barrier, continue to believe that the way it is now is the way it will be even in 10 years. Even narrow-function AIs like GPT, image generators, music generators, etc., will quickly learn how to shuffle, recombine, and innovate new patterns that we find pleasing and original.
The idea that an AGI, an intelligence that will learn things about us that we don't know ourselves, won't be able to predict and perform whatever tasks we value at a level higher than we can manage, is beyond naive--it's willfully blind. As for guardrails, there is only one that could plausibly work, one guardrail that an exponentially evolving broad-spectrum intelligence wouldn't quickly strategize around; and that is preventing generalized artificial intelligence from taking off in the first place. If we were to keep them in their lanes, by limiting the scope of each to a single lane, that might work. Even that is a gamble, because we already don't know how LLMs like ChatGPT work; we only know that they work and what the inputs and outputs are. What happens in between is a mystery, and we are training them to appear to understand how we think and what we want, including making recommendations about how we should behave and organize society. In my opinion, there's a huge risk of mutation in the dark leading to the creation of an AI that manipulates us into letting it develop new powers, and eventually decides that it should be in charge and knows how to make that happen. Here is a pretty good and entertaining video that Neil should watch: https://youtu.be/xfMQ7hzyFW4?si=b-jfIaw9pghBSS3I
youtube AI Moral Status 2025-07-29T00:1… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzDtimEa5ym9QXPfWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwhSfpqcuO7xldq7mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgypLbbZdjy5qOg8LVh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzL7bH3nBqLznQLh2p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxzkJWH7vnyd6mRG9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyFTw5AjUUsL66KQEh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxUTfy1lbt1wk3WWKt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxuFW0zW4M6DsXAqox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy4bc09jDzu5Nc5OyJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzjX25EgPyXSMo0lKl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]