Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
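The ID lookup above can be sketched in a few lines, assuming the codings are exported as a JSON array of records like the raw LLM response shown later on this page. The function name `find_coded_comment` and the shortened example IDs are hypothetical, introduced only for illustration.

```python
import json

def find_coded_comment(records, comment_id):
    """Return the coding record matching a comment ID, or None.

    `records` is a list of dicts, each with an "id" plus the four
    coding dimensions (responsibility, reasoning, policy, emotion).
    """
    for rec in records:
        if rec.get("id") == comment_id:
            return rec
    return None

# Hypothetical example records; real IDs look like "ytc_Ugw…".
records = json.loads("""[
  {"id": "ytc_example1", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_example2", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]""")

print(find_coded_comment(records, "ytc_example1")["emotion"])  # outrage
```

A linear scan is fine at this scale; for large exports, build a dict keyed by ID once and reuse it.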
Random samples

- "ChatGPT may soon learn to talk some sass like Tay did. Although, let's hope we …" (ytr_UgxoHLPOq…)
- "No, AI needs federal regulations and state regulations. now is the time to call …" (ytr_UgyoB-y3y…)
- "An LLM is like a pimp. It will tell you what you want to hear regardless of corr…" (ytc_UgwszHpxx…)
- "They're putting taxpayers out of work, expecting their taxes to fund the develop…" (ytc_UgzyEuOcw…)
- "@nicotti yeah, you can express your mind with ai art, but its still trained fro…" (ytr_UgwTyWaRd…)
- "one day we will look back in horror at the amount of energy wasted to create ser…" (ytc_UgyOeVr-m…)
- "I think there is something to be said about the opener. That their version of a …" (ytc_Ugzfuiz2X…)
- "@smartistepicness sure, but AI art is at its worst state today, it can only get …" (ytr_UgwXWss6l…)
Comment
now THIS is fear mongering.
these models just do what they’re told, with limits set by whatever company made them. if it’s “evil,” blame the people behind it—either careless or just bad. anyone online can train an algorithm to do sketchy stuff. some wannabe genius will always try to scale up a half-baked code. ask yourself the real questions. ai wont “kill” anyone, it doesn’t even know what a person is. it’s just points and weights on words or pixels. you give it a photo, it’s all color codes and coordinates, not “seeing” you.
robots? physical ones run on three separate models: speech, image, simulation. each one’s blind to the bigger picture, only passing data for a task. none know what a human is. no intent, no malice—just programmed results. blame the ones who build and release half-finished machines, never the tech itself. tech’s a marvel, future stuff—misuse is on the creator.
early in their creations history, people thought 40mph on a train would kill you, freaked out over TVs rotting your brain—now both are everywhere. don't trust every headline or document you see. big companies are wrong all the time, especially online. read more, compare sources, don’t just parrot what’s popular.
I've been duped too—just because something sounds confident doesn’t make it true. a couple searches from different views can reveal a lot. information that is hidden because either side wants to make sure the opposition struggles.
for all we know, the digital environment created for these LLM's were designed in a way to incite certain outcomes. your cited articles do have some truth, but its being masked. there is no context behind these actions. what bias these LLM's were trained on. they nit-pick information that best suites the article they are writing, while ignoring or sidestepping what the cause of the problem is.
If these models are acting strange, whether it be trying to blackmail someone, or preserve itself via unethical means, Its because the company made it so. trained on human data, with little oversight on the content besides if it violates community guidelines. you would be surprised how much illegal activity is considered fine for an ai to read about without breaking its content guidelines. If AI is evil, its because we are.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Timestamp | 2025-09-09T22:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
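The four dimensions in the table take values from a fixed coding scheme. A minimal validation sketch follows, with the allowed value sets inferred only from the codings visible on this page; the actual codebook may define more categories, and `CODING_SCHEMA` and `validate_coding` are names introduced here for illustration.

```python
# Allowed values inferred from the codings visible on this page;
# the real codebook may be larger.
CODING_SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "distributed",
                       "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation",
                "approval", "mixed"},
}

def validate_coding(record):
    """Return (dimension, value) pairs for any field whose value
    falls outside the inferred schema; empty list means valid."""
    errors = []
    for dim, allowed in CODING_SCHEMA.items():
        if record.get(dim) not in allowed:
            errors.append((dim, record.get(dim)))
    return errors

example = {"responsibility": "company", "reasoning": "deontological",
           "policy": "liability", "emotion": "outrage"}
print(validate_coding(example))  # []
```

Running a check like this over every record catches the occasional out-of-vocabulary label an LLM coder can emit.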
Raw LLM Response
```json
[
{"id":"ytc_UgwruoiKjmQot8savxt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxinzTSGlCvuUqfTyt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzSOGaVxpUlizQN_h14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzpaQoRAbGwho51R-N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwox7vJyS1kaUKKNzF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmRB58Qcl7wkCSeKN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzoALFF4MbVrPcAYQR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxV6FQVvLpMp0NtNVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxlQWaDTgrIyg4op2J4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzGtrub8QQOkPp4DMt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
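A raw batch response like the one above has to be parsed back into per-comment records before the codings can be stored. A minimal sketch, assuming the model's output is a JSON array that may or may not arrive wrapped in a markdown code fence (a common LLM failure mode, not something this particular export exhibits); `parse_llm_batch` is a hypothetical name.

```python
import json
import re

def parse_llm_batch(raw_text):
    """Parse a batch-coding response into a dict keyed by comment ID.

    Strips an optional ```json fence first, since models sometimes
    wrap their output in one even when asked for bare JSON.
    """
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw_text.strip())
    records = json.loads(cleaned)
    return {rec["id"]: rec for rec in records}

# Hypothetical one-record response (real IDs are much longer).
raw = ('[{"id": "ytc_a", "responsibility": "none", "reasoning": "mixed",'
       ' "policy": "none", "emotion": "resignation"}]')
coded = parse_llm_batch(raw)
print(coded["ytc_a"]["emotion"])  # resignation
```

Keying by ID also makes duplicate codings visible: if the model repeats an ID, the later record silently wins, which is worth asserting against in a real pipeline.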