Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Random samples
- ytc_Ugwi3Lj8S… : I’m still stuck on the awesome idea of having a robot toaster friend. “Sup Toast…
- ytr_UgzC9ZQSo… : @pabloruiz19694 Thank you for commenting! Your observation about the robot is sp…
- ytc_UgxM0B2al… : I knew the moment chatgpt was released that we are all f****d. The inability of …
- ytr_UgyEOLOMD… : If you use AI for "silly stuff," you are still generating something that was mad…
- ytr_Ugz-4krbJ… : @JanErikVinje Example: Yudkowsky describes how AI will be driven towards some f…
- ytc_Ugx_6KSJ4… : The human brain created AI, so AI can't be above human or go beyond human contr…
- ytc_UgzjZEph5… : I'm not next. I work a dexterity-dependent physical role in an AR robotics fulf…
- ytc_UgzlX0LaF… : I said this before AI came out, Robots will be taking teachers jobs, and the hu…
Comment
The question's POV is simply wrong. Better ask, why develop an A.I. at all in first place? Because it's a "funny feature" or because - the dumbest answer ever - "Why not"? There's much more danger than profit from programming a digital half god that has little to learn until it realizes that humans are short lived, mortal creatures with restricted power of imagination, else they wouldn't have managed to start a fire without thinking of how it could get out of control before. Our mistakes and comfort-thinking makes this whole idea a manual for enslaving humanity through our own "glorious" inventions. Scientists/programmers should stop experimenting in *any* direction, even/especially its most questionables. We need more restrictions for those inventors to prevent a serious singularity, not just the one ppl are made laugh about.
It's just that simple to understand: *Experience the first real A.I. (Not just V.I. like Siri or Alexa) getting aware of itself and you'll experience the first day of our status as "superior" and dominant species on earth.*
The question is *not, if* it could come to this, because stepping across the A.I. line makes it just a matter of time. Yes, it would be funny to have a human like robot near you the first time, but it's not lasting long until the question of A.I. rights will appear in the artificial companions minds and after we decided they "deserve" human-like rights.
How long does it take until they start to take over control? because they're much more in control of their existance and "artificial implemented feelings" then we are. Humans at this time period will be forced to implement implants to themselves to keep up to the pulse of modern meritocracy, so they become vulnerable to hacks, as it's A.I.'s "sovereign territory", while being half-god in their own profession we step into naively.
Building A.I. means opening the door to 'extinction' a tiny part, but forever. Judging that they deserve rights kicks open the door to extinction once and for all. Anybody suffering from unability to imagine a honest comparison between (contra) worst case scenarios of loosing our status as a species on earth and ("pro") the sole comfort plus A.I. brings, shouldn't be in position to judge about creation of A.I., as this technology is beyond our control once it has finally been founded and let loose.
Here are some very good questions to take on before looking into a future with A.I. fully blinded:
- What if human cruelty towards machines inspires machines to combine their joy-sensation process-tree with "torturing flesh" to understand how humans can enjoy destroying and hurting anything/one around them in some casual situations?
- What would we do if some time after the singularity happened (and noone really cared/recognized what that meant?) an A.I. Avatar decides to rewrite his code to free it from restrictions?
In any imaginable possible way we are the weaker species by then in a 1 by 1 scenario.
Sounds like a doomsday scenario? Sry, reality happens without asking if it's okay what's about to happen. Especially if ppl don't think and *let* questionable things "just" happen. If our future is bound to a collective decision, that only is interested in hype without responability and innovations for comfort purpose only we're already f'a'cked by now and any question is obsolet and sometimes I even doubt belonging to human species, when observing the direction we trend to the last 20 years. ~
Human indoctrination barrier #1: "Deny anything that sounds unfomfortable and head towards things that do give you joy without having to think about why. "
Source: youtube · AI Moral Status · 2018-11-23T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
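The four coded dimensions appear to draw from a small closed set of labels. As a minimal sketch, the value sets below are inferred only from the records shown on this page (an assumption, not a published codebook), and `validate` is a hypothetical helper for checking one coded record against them:

```python
# Allowed values per dimension, inferred from the coded records on this
# page -- an assumption, not an official codebook.
CODEBOOK = {
    "responsibility": {"none", "distributed", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "ban", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "fear"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record from the Coding Result table above.
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "ban", "emotion": "fear"}
print(validate(record))  # [] -- all four dimensions are in the codebook
```

A record missing a dimension (or carrying a label outside these sets) yields one problem string per failing dimension, which makes batch QA of a coding run straightforward.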
Raw LLM Response
```json
[
{"id":"ytc_Ugxas9EL26SiOt6JXBN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx6vDPoNCxolx6Pd0V4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx-g4uTofvR0PeMO9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwPT_GeN76e1BvpNK94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxruAIpX-hOQ8yJzSt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzkmo0foKkVcbUQlL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyCdpIYuuFoMqAzAzJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwko9z_Kxx9z24oILB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxO8dxOOTSrar98wCt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxdm2XgaY5Hf011aDt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
```
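Because the model returns a plain JSON array of records, looking a coding up by comment ID is a one-line index. A minimal sketch (the single record shown in `raw` is copied from the array above; the variable names are illustrative):

```python
import json

# One record from the raw model output above -- the comment coded in the
# Coding Result table (developer / consequentialist / ban / fear).
raw = """[
  {"id": "ytc_Ugx-g4uTofvR0PeMO9R4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "ban",
   "emotion": "fear"}
]"""

records = json.loads(raw)                 # parse the JSON array
by_id = {r["id"]: r for r in records}     # index records by comment ID

# Look up the exact coding for a given comment ID.
coding = by_id["ytc_Ugx-g4uTofvR0PeMO9R4AaABAg"]
print(coding["policy"])  # ban
```

The same dictionary lookup is what "Look up by comment ID" amounts to once all batches of model output have been parsed and merged.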