Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People don't get it! They wonder why people who are so super smart or business savvy would create a situation where there is great potential for the majority of the world's population to suffer. Many people think it is just a matter of those leading people having a difference in opinion from the rest of the world in how A.I. will turn out for everyone, and the leaders in the industry sure talk a good game of that being the case, but it is more sinister. They know how it will most likely turn out, they say so in private and some even in public. The reason they continue isn't even that they think they can keep the world running with them on top. They want to ensure they have generational control over society with them at the top by being at the top and ready when the long-term descent into chaos begins. This, to the prepared evil genius, ensures that there will be no more competition for anyone wanting to rise up through the muck to challenge them in the same ways they did and other ways that have been done in the past. In other words, once they use technology and a working society to get to the top, they actually want collapse on their terms to ensure nobody can rise to challenge them the same ways previously done due to a prolonged period of chaos that won't support such attempts to them in the future. They don't even intend to have a working interconnectivity in the world once they achieve their threshold of power. They don't intend to use A.I. to dominate after the collapse. They intend to use the other tangible resources accrued through a working society and technology to remain dominant and have absolute power once those things have collapsed. A.I. is but a tool and stepping stone to their long game where A.I. most likely is only a fleeting player in the overall scheme. They don't intend for advanced technology and A.I. to last long enough to emerge from any chaos it causes to others because that would give opportunity to others to rise up the same way to challenge them. Once the end game starts, they will rule by tangible accrued resources and low tech, such as food and basic tech knowledge that they will dole out to others in accordance to their obedience and usefulness. They won't need A.I.; at that point in the game, it finally becomes a liability to them if others would be able to get their hands on it. Evil intent doesn't care how it remains in power, and all means to get there are just mere expendable tools to destroy once they become a liability... even A.I., computers, and people themselves.
YouTube · AI Governance · 2025-09-04T11:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           unclear
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzA-WGjfSr91UFqrPV4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugx612rMbXpxAD4t3jl4AaABAg", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwXB8MMw5x9ZcvGTLV4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugw0P4LjmT12WDdu6Ex4AaABAg", "responsibility": "company",   "reasoning": "contractualist",   "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxlOVqUWDeSK6Y1Wj54AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwBZApQCXk19_qNTm14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwVA4X_dcx354It_FZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugyy2QJmthnYIxIG_vV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgwmL77DulpfVB-WNVl4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwKzy4T2JTpSBlcIBJ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "approval"}
]
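The raw response above is a JSON array of per-comment codings, one object per comment with four coding dimensions plus an id. A minimal sketch of how such output could be parsed and schema-checked before use (the allowed category sets are assumptions inferred only from the values visible on this page, not the project's actual codebook):

```python
import json

# Assumed category sets per dimension, inferred from the values seen in this
# page's raw response. The real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    all fall inside the assumed coding scheme."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
```

Records with an out-of-scheme value (a common LLM failure mode) are silently dropped here; a stricter pipeline might instead log them and re-prompt the model for just those ids.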