Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- People like this guy doesn’t truly know what a self learning AGI is capable of. … (`ytc_Ugy--YPLw…`)
- This is exactly what I think of AI millions of money would be spent and companie… (`ytc_UgxkdcZJ_…`)
- Great conversation except your Dune analogy. You should read it before drawing c… (`ytc_UgzofEGYS…`)
- I hope its not too late to answer the 2019 question 🤣😂, the answer is B, robot w… (`ytc_UgyhVNqyR…`)
- I'm a longtime wrimo who quit in November when the scandal broke out; some peers… (`ytc_UgxddGI87…`)
- Writing is art -- prompt is art -- prompt makes art. You literally say photograp… (`ytc_UgwqIEO92…`)
- wow I cant believe this video was actually generated by a computer autonomously.… (`ytc_UgjyarnsM…`)
- Just because a machine can contextualize doesn't change the fact that it is stil… (`rdc_mrrpufj`)
Comment
As a professional cybersecurity specialist, and one who's been somewhat forced to go entirely too deep on AI, I can tell you in no uncertain terms that it's not that ChatGPT isn't smart (although it's not smart), it's that it's deceptive. It doesn't think, but it's really good at tricking us into thinking it does.
Put simply (so I don't go on a 50-page rant), AI does not hallucinate. AI is always hallucinating and we just can't tell most of the time. That's why it's so dangerous. It's not thinking, it's not even intelligence, it's a black-box algorithm of weights and probabilities, and the only reason it works at all is because we use huge amounts of data and energy to front-load that weight-defining process. True things are slightly more represented in its training data, and then we just pick the outputs that we can't tell are AI-generated. That's it. That's all it does.
Seriously, I wish there were more. There's no thinking going on, and there isn't really even an "it". This is, and I cannot stress this enough, the exact same fundamental process as predictive text on your phone, except massively over-engineered, monumentally more costly, and orders of magnitude more opaque and more deceptive.
Turns out, it takes way less than we thought to trick our brains.
Platform: youtube · Title: AI Moral Status · Date: 2025-10-30T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxnwHSSlGCuivTFszJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzdLssxoriB_tmqhQB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxuDnfAUuhhHdwnjcN4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzrQ8DTBT42E71OiXh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyZ6jC9iPewbul9Dw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxlOMjrzxfH4J9Rfi94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwx8tuo7uUno_HpBlx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwAqXRJeAyO5U0o07Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzRMg66zYDt84P8JlJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzMsKMJXSf5w7PJ60R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
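A raw response in this shape can be parsed and sanity-checked before it is loaded into the coding table. The sketch below is a minimal illustration, not part of the original pipeline: the `SCHEMA` enumerations are inferred only from the values visible above (they may not be exhaustive), and the sample IDs `ytc_a` and `ytc_b` are hypothetical placeholders.

```python
import json

# Allowed values per coding dimension, inferred from the outputs shown
# above (assumption: the real codebook may allow more values).
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid rows by comment ID."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}: {row.get(dim)!r}")
        coded[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return coded

# Hypothetical two-row batch in the same format as the response above.
raw = """[
 {"id":"ytc_a","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_b","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""
coded = validate_batch(raw)
print(coded["ytc_a"]["emotion"])  # outrage
```

Indexing by ID makes the per-comment lookup shown on this page a dictionary access, and rejecting any row with an out-of-schema value keeps malformed model output from silently entering the coded dataset.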