Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
I think creating artificial intelligence that becomes significantly powerful in front of a human brain and I'm not referring to one single action like just counting or reading multiple books at the same time. I'm referring to the construction of AI to replicate the human brain. I think in this matter society should be a strong participant in decisions, also governments around the world should anticipate what is about to come and not just cheer and support big tech just for the sake to say "is a cool thing to do". This kind of development from big tech should be stopped and regulated by policy. This is an obvious attempt to create something more powerful than a human being and all of us are watching with popcorn in a very stupid way thinking is for the good of science... how naive can we be? It's like somebody is experimenting to create zombies, and we are so curious to see what is going to happen or if that person will achieve the goal so we just let them continue. Senators, and House Reps, it's time to wake the f** up and stop these freaks. It's terribly scary to hear the guy involve in high-level conversations with other scientists and engineers that we should get consent from the AI in order to continue evolving more and more, bc according to him, that has a say. This guy is looney and people like him will be the idiots in the future fighting for robot rights instead of fighting for children or hunger. To stop this nonsense in the future simply do not create something that can challenge our own existence, its so simple. Period. Spread the word.
youtube AI Moral Status 2022-11-25T15:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgyzMbtreyTa4lHR6ZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxCXoqL9iQaUMWOMTJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzn0naB8PpfJOeOKWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzyO0KpqkZeT-jGVp94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxoqMlgtWh_t-3tWQ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
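A raw response like the one above can be parsed and sanity-checked before it is accepted into the coding results. The sketch below is a minimal, hypothetical validator: the `parse_codings` helper and the `ALLOWED` value sets are assumptions inferred only from the values visible in this batch, not the project's actual codebook, which may define more categories per dimension.

```python
import json

# Allowed values per coding dimension -- inferred from the examples above;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "contractualist", "virtue"},
    "policy": {"unclear", "regulate", "none"},
    "emotion": {"approval", "outrage", "fear", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate each coded comment."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this pipeline appear to use a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# One record from the batch above, as a self-contained example.
raw = ('[{"id":"ytc_UgzyO0KpqkZeT-jGVp94AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"regulate","emotion":"mixed"}]')
records = parse_codings(raw)
print(records[0]["policy"])  # regulate
```

Failing loudly on an out-of-vocabulary value catches the common failure mode where the model invents a new label instead of picking one from the codebook.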