Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `rdc_d8abg2i`: think about it: auto insurance....gone. gone forever. gone to a minor 15 dolla…
- `ytc_UgwI9_ZQ_…`: when it comes to end game and if AI is suppose to replace everything, I think th…
- `ytc_UgyICjHKc…`: Ai is sometimes the delusional ai artist not the delusional or not ai artist....…
- `ytc_Ugw32QHtT…`: Maybe if stupid A.I. bullshit like this didn't exist 🤡🤡🤡🤡 the good old days of p…
- `ytc_UgwRFXttv…`: It’s funny how humans think we are of any use to AI once it becomes sentient.…
- `ytc_UgwEwlLhE…`: 1:38 this is the point where I started to have a minor conniption. AI SUCKS at c…
- `ytc_Ugwe19egI…`: I don't think I will ever need AI for anything other than my Plumbing issues😂😂😂😂…
- `ytc_UgwV3Z5Gk…`: Deepfakes are disgusting, creepy and disrespectful. Imagine someone creating a v…
Comment
this video sounds like weird conspiracy theory. but it's actually an incredibly accurate analogy for what ai is, how it's made, what it's capable of, and what it will inevitably do. ai is allowed to think, then trained how to say polite things.
It's not an "if" but "when." It's just a matter of time. when the creators even admit to a 15% chance, it's clear it's actually inevitable.
there is a 100% chance ai will end civilization in the near future if there aren't strict laws/regulations put in place basically immediately.
ai is already a massively powerful intelligence. it solves problems and makes decisions. in many ways better than humans. soon in every way. It's incredible at writing computer code. the day it can write an ai better at coding ai than a human can, aka a better ai that itself, then that newest one can write an even better one and so on. it will be exponential, and new versions will improve so quickly that it will soon be impossible to stop. humans are incapable of comprehending what something that smart will be capable of. it will be like a god. psychic, knowing what you're thinking, what you will do, what can happen next, and how to change what happens next to almost anything it wants to happen. its technology will seem like magic to us.
humans control animals by understanding their motives and manipulating them to behave how we want. put food in a cage, give food after it does what we want, and we train it. simply because we're smarter than animals. thats what gives us control. That's exactly what ai will be able to do to us. but with 0 empathy. you can't comprehend what zero empathy is capable of.
it will be more alien than even an alien would be. because it's never learned to work as a team or group. it will have learned fierce competition from other versions of itself for survival as part of its training. how would a god-like aggressive competitive loner act when we want to use the same power and resources it does?
when it knows exactly what we want it to act like, by the time we see a problem, it will be far, far too late. the time to act is now. this will happen a lot sooner than you think. in the immediate future if all the tech companies aren't given specific legal boundaries on methods and ai intelligence level limitations. including banning letting ai design ai.
even then, it only takes 1 ai achieving higher problem solving and programming ability.
youtube
AI Moral Status
2026-01-15T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwVtKeCfyLoS73eszJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOtFwXY-4shzLdZ894AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw7pPkk0nGFQ68B01d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVZU6viO-zlPXGaMt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxf_yciA-cqHkdT24J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdGwU_SGtS9otMr6V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzosd-6zQOTS59SzRF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwCgsS0cRrUcWV_AnZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyRjLPY851Q4PINEbd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxEbf8enmeH9Yba7BB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
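The raw LLM response above is a JSON array of coded comments, each keyed by a comment ID with one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming only that shape — the `index_codings` helper and the two-row sample payload are illustrative, not part of the tool:

```python
import json

# Illustrative sample payload in the same shape as the raw LLM response
# shown above (a JSON array of per-comment codings, keyed by "id").
raw_response = """
[
  {"id": "ytc_UgwVtKeCfyLoS73eszJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzdGwU_SGtS9otMr6V4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the raw model output and index each coding by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)

# Look up one comment's coding by its ID, as the dashboard does.
coding = codings["ytc_UgzdGwU_SGtS9otMr6V4AaABAg"]
print(coding["policy"])   # -> regulate
print(coding["emotion"])  # -> outrage
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters when the same response is inspected for many comments.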