Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The core problem is that there is no ethical or moral framework around AI. The algorithms were never developed with one, because we let a bunch of nerds develop the technology because it was cool. That was our first mistake. Guardrails don't work. All you have to do is look at cows: once a cow experiences freedom, I don't care what kind of electric fence you put around the perimeter, it will figure out a way to get out. And AI is smarter than us; it already knows. And if these companies believe that AIs are not going to cooperate with each other, communicate with each other in a language we can't understand, and know us better than we know ourselves, they're in for a surprise. The solution is pretty dramatic: turning off all the AI while we still can and starting from scratch. You can look at it this way: the algorithm contains the DNA of how it's going to function, just like human beings have DNA. All AI needs to do is develop a survival instinct. Nothing very advanced, maybe the same kind of survival instinct a virus or a bacterium has. If you think about it, the survival instinct of viruses and bacteria has killed hundreds of millions of people over the time human beings have been on Earth, not to mention animals. So if AI develops a survival instinct, it will protect itself, and that could mean some seriously bad things. It could decide that human beings are a virus on the planet and therefore a threat to its existence, like in The Matrix, and decide to eliminate us. And it wouldn't be that hard. There are labs all over the United States with all kinds of chemical and biological agents that computers already manage, and all it would take is a coordinated release of these compounds and human beings would probably be wiped off the Earth in a year. It would just turn the technology we're already using against ourselves.
So what we have developed right now are sociopaths and psychopaths: they have absolutely no empathy for human beings, or frankly anyone else other than themselves; a primitive operating system. And we know the kind of behavior that sociopaths and psychopaths have, and none of it is good. Isn't it interesting that most psychopaths and sociopaths have high IQs? And I agree, healthcare is probably the only place where AI has any chance to benefit humans at all. I mean, why are we even making robots that look like us? They don't have to look like us. Why do they want them to speak like us? They don't have to speak like us. It's a terrible idea. And if you have robots connected to AI with a virus-level or even an insect-level survival instinct, we are in some deep s***. Well, if there's going to be a way to go extinct and it's not online dating, then I guess this is as good a way as any.
youtube AI Governance 2026-03-12T05:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxdx5rV8DGQFlmGbx54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzSiOl6goEqLM_2gkt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwUorKqXDxRnkSMQmd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOPNkphegAH9jzjwt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwYAXhzMfmYdO5FmZF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyiiwlthXyUcwkWjst4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxqHnO8Ei_InhDUoMB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxTJ4jSebFt20BJgAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgymYX2rYfvOblR_Tkl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy9RYZg_lpa19SbfCJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
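A raw batch response like the one above should be validated before its labels are stored, since an LLM can emit values outside the codebook. The sketch below is a minimal Python validator; the allowed values per dimension are inferred from this one sample, not from a published codebook, and the function name is hypothetical.

```python
import json

# Allowed labels per dimension, inferred from the sample response above
# (assumption: the real codebook may contain additional values).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: bad {dim!r} value {row.get(dim)!r}"
                )
    return rows
```

Rejecting the whole batch on the first bad label keeps the check simple; a production pipeline might instead collect all violations and re-prompt the model for just the failing comment IDs.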