Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Every time I see an English person trying to engage with AI thought it makes me …" (ytc_Ugz6Abtoc…)
- "The lady should retrain as a plumber. Suppose I have a water leak under my house…" (ytc_Ugyiys1zt…)
- "As someone in the datacenter industry, this shows me how much misinformation is …" (ytc_Ugya4R-Iv…)
- "38:03 in virtually every single AI safety debate I have watched, the notion that…" (ytc_UgxC4_iAY…)
- "I am in computer science and I can 100% tell you that AI will never replace a hu…" (ytc_UgxoTVjXr…)
- "Duh 😮, we don’t need any AI machines, so shut them off and stop working on them…" (ytc_UgzgBXMqA…)
- "AI would be running the whole world putting the plug would have to shutdown the …" (ytc_UgxjESaWG…)
- "FTA: "How bad is it? Native Messaging is a standard Chromium mechanism. Nothing…" (rdc_ohpcl1k)
Comment
One direction I didn't see in this video:
If we were to program a super advanced AI that could learn and eventually become so advanced that it essentially gains consciousness, *who's to say its "consciousness" would even remotely resemble our own?* It might "evolve" into a completely different set of goals and fundamental concepts than we did.
Think about it: what do we as humans know on the most basic level? Survive, reproduce, grow. From there, we branched out. Food and water for sustenance, sex for reproduction. And then more and more from there. But the AI would probably be completely different, because it would need to "learn" the fundamentals; unless we program them in from the start, it won't have any to begin with. And since it's not a human with those needs, it could learn completely different fundamental concepts and morals from there. It could essentially be an intelligence so alien that we can't even grasp it, made by us right in our own backyard. At that point, robots pretty much wouldn't need "rights" at all, or at least no rights that would even remotely resemble our own. How could we call something like that human, and give it our silly human rights?
And that begs the question: what would it evolve into? Suddenly robots rising up against us to kill us all doesn't sound so far-fetched, does it? After all, if robots have no morals no matter how advanced they are, and instead learn something else that we can't even understand, then who's to say those fundamentals don't involve "Humans must all be dead, or worse"?
Source: youtube · AI Moral Status · 2017-02-23T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UggvnE_-CErSGngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjwxPmrNXneQXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgiCh_xZkLxZJHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgiAVaZPcO_y-3gCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggHL_iuYiVHw3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgiACXM3raSp7XgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgiN8rZHH4-XJHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UghT90X0cBS7Y3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugi9Bl1heMcN7HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghWs4FWrM94KngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
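The raw batch response is a JSON array of coding records keyed by comment ID, which is what the lookup box at the top of the page searches on. A minimal sketch of that lookup in Python, assuming only the field names visible in the response above (`index_by_comment_id` and the two-record excerpt are illustrative, not the dashboard's actual code):

```python
import json

# Two-record excerpt in the same shape as the raw LLM response shown above.
raw_response = """[
  {"id": "ytc_UggvnE_-CErSGngCoAEC", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiCh_xZkLxZJHgCoAEC", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and key each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgiCh_xZkLxZJHgCoAEC"]["policy"])  # prints "regulate"
```

Indexing by `id` once and reusing the dict keeps each lookup O(1), which matters when inspecting many comments against a large batch response.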