Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is not a new problem, just on a far bigger scale. And the crossroads is always the same: either you create a way for people to still earn some money in some way, or you risk civil unrest. 100 years ago, the rich divided the servants' work among multiple servants who all did different things. Today we have tons of bullshit jobs. Imo when AI hits, humanity will have to reshape the financial system and create some kind of basic income. Otherwise the civil unrest will come. There is a third possibility - but this is extremely dark. It would be to kill the majority of people. But of course the smart wouldn't do it directly; it would happen coincidentally through a pandemic or something like this. Btw, research the reincarnation soul trap topic. This theory comes to the conclusion that the problems in life are intentional, meant to extract energy from souls (earth is a "loosh farm"). NDEs tend to show that souls are deceived into going back to earth against their will. So what if earth is already an AI system that feeds on human suffering? The AI wouldn't have much interest in destroying us, but it would have an interest in somehow keeping us in some kind of misery trap loop. This concept that life is full of unintentional problems waiting to be solved by intelligence is imo just a theory. The soul trap theory is spreading massively lately. If it is true, you have to think about AI in a completely different way. It would be more of an enemy, and any kind of merging with it or uploading consciousness into it would be a huge no-go, because the danger of being trapped in another AI machine that is even harder to leave would be massive. I mean ... he already told it: superintelligence created this, everything is perfect except morality. Alarm bells should ring at that point. If superintelligence created this, then the moral issues are intended.
youtube AI Governance 2026-04-16T13:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwXu_juwU6nqVgApKB4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxRqGx28NGPaFeTyAJ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgzlidWA8eRtBJ0DqfJ4AaABAg", "responsibility": "developer",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugw1mxS9NJO27dT8cOR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwKqDgBycZhFwOJK7F4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugwj00gSEUDmjL7MaM94AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzN-UcIWyG9WiXRwYV4AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgyEV9xxClYiVYRWu3d4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",     "emotion": "skepticism"},
  {"id": "ytc_UgzuV1JZQ2xRbzl2OH54AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgxnucDc7OWdhdz3Pxx4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"}
]
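The raw response is a JSON array with one object per comment, each carrying an id plus one value per coding dimension. A minimal sketch of how such a response could be parsed to look up the coding for a single comment (the `lookup_coding` helper is hypothetical, not part of the tool; the data is an excerpt of the response above):

```python
import json

# Excerpt of a raw batch response: a JSON array of per-comment codings.
raw_response = """[
  {"id": "ytc_UgwKqDgBycZhFwOJK7F4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwXu_juwU6nqVgApKB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the coding object for one comment id."""
    by_id = {entry["id"]: entry for entry in json.loads(raw)}
    return by_id[comment_id]

coding = lookup_coding(raw_response, "ytc_UgwKqDgBycZhFwOJK7F4AaABAg")
print(coding["emotion"])  # mixed
```

Indexing by id first makes repeated lookups cheap when inspecting many comments against the same batch response.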