Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One direction I didn't see in this video: if we were to program a super-advanced AI that could learn and eventually become so advanced that it essentially gains consciousness, *who's to say its "consciousness" would even remotely resemble our own?* It might "evolve" a completely different set of goals and fundamental concepts than we did. Think about it: what do we as humans know on the most basic level? Survive, reproduce, grow. From there, we branched out: food and water for sustenance, sex for reproduction, and then more and more from there. But the AI would probably be completely different, because it would need to "learn" the fundamentals; unless we program them in from the start, it won't have any to begin with. And since it's not a human with those needs, it could learn completely different fundamental concepts and morals from there. It could essentially be an intelligence so alien that we can't even grasp it, made by us right in our own backyard. At that point, robots pretty much wouldn't need "rights" at all, or at least no rights that would even remotely resemble our own. How could we call something like that human, and give it our silly human rights? And that raises the question: what would it evolve into? Suddenly robots rising up against us to kill us all doesn't sound so far-fetched, does it? After all, if robots have no morals no matter how advanced they are, and instead learn something else that we can't even understand, then who's to say those fundamentals don't involve "humans must all be dead, or worse"?
YouTube · AI Moral Status · 2017-02-23T17:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
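
The dimension/value pairs above follow a fixed coding schema. As a minimal validation sketch, each value can be checked against the category sets that appear in this page's raw response (these sets are inferred from the visible data, not taken from an authoritative codebook):

```python
# Category sets inferred from the values observed in this raw response;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval"},
}

def invalid_dimensions(row: dict) -> list:
    """Return the dimensions whose value falls outside the known category sets."""
    return [dim for dim, allowed in ALLOWED.items() if row.get(dim) not in allowed]

# The coding result shown above, as a plain dict.
row = {"responsibility": "none", "reasoning": "consequentialist",
       "policy": "unclear", "emotion": "mixed"}
assert invalid_dimensions(row) == []
```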
Raw LLM Response
[ {"id":"ytc_UggvnE_-CErSGngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjwxPmrNXneQXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgiCh_xZkLxZJHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgiAVaZPcO_y-3gCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UggHL_iuYiVHw3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgiACXM3raSp7XgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgiN8rZHH4-XJHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_UghT90X0cBS7Y3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugi9Bl1heMcN7HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UghWs4FWrM94KngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"} ]