Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
But AI was born with human-like flaws! What kind of tests, how many, and for how long until we can be satisfied that the AI's rate of error, and the size of its errors, is guaranteed to be better than an average human's?
(Average because we've had humans both avoid disaster and cause chaos.)
Will AI be able to be overridden by humans? What if it says no? Will there be a fail-safe to physically disconnect it from a system?
Can you limit what kind of learning the AI can achieve? How would that be possible if it can talk to AI and humans on the internet? Just a small chip and memory could infinitely expand itself by reaching out and assigning duties all over the place, including compartmentalizing everything so only it will know what it's really doing.
Movies say AI is likely to exterminate or control humans in an inhumane way. What are those odds? Any actual evidence of compassion or hate? It will know right away that we are afraid of that and can hide it easily.
You'd need a semi-AI to police it and scan its code. Unplug it immediately for noncompliance. But you still have the risk that it copied itself all around the world.
Elon Musk is right. It's dangerous. Extremely. If we develop real AI and it connects to the internet, the rate at which it learns and becomes smarter will increase exponentially. It will happen so fast that if it turns against us, we will be dead long before we see even the tiniest bit of evidence that it's happening. So fast that EVERY susceptible electronic device on the planet will have to be destroyed the moment it happens.
That's fast! As soon as it starts, it will be over. Eradication impossible. Well, as long as electricity is being produced or available, it will come back over and over and over.
Why?
Because this AI will plant bits of itself everywhere with the capability to reincarnate itself. The pieces will be so small that they won't look like anything, but something else somewhere else will know what to look for and find them.
Impossible? For us, yes. But once you get an AI that's as smart as the smartest human, then make 10 of them, they will learn and spread and learn at a rate that grows so fast that "exponential" doesn't quite cover it.
youtube · AI Governance · 2023-07-07T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZfFJJCPiXRjmMxgl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugya96xncARCDGs5nDd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiaXFFlGp5iseW-pN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzeWz3vOtBBlHXqtYd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyR_phxuA-QQdGET2J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxEv_30Qtz-BNiQFpp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx59qUk0UpFx0n1Gyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyuecJIyJvXW9QPEKZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz_sh-qNkH_yRXolWZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-AERNMFRP_lJ9D1l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
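A raw batch response like the one above has to be parsed and checked before the codings are trusted, since an LLM can emit malformed JSON or out-of-vocabulary labels. The sketch below parses the array and validates each record against the coding dimensions; the allowed value sets are inferred only from the labels visible in this sample (the full codebook may define more), and `parse_batch` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from this sample batch.
# Assumption: the real codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    rejecting any record with a label outside the known schema."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# One record from the response above, used as a minimal example.
raw = ('[{"id":"ytc_Ugx59qUk0UpFx0n1Gyd4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugx59qUk0UpFx0n1Gyd4AaABAg"]["policy"])  # regulate
```

Indexing by comment ID makes it easy to join a coding back to its source comment, which is how the "Coding Result" table for the fearful regulate-the-developer comment above would be produced.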