Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Chemistry also doesn't tell us how it works. It had to be researched so we could…
ytr_UgxVI52o8…
Did he say the Chicago pd uses a racist ai to predict who will commit crime. Lik…
ytc_Ugy9uKdGG…
I can't see anyway that AI alignment ends up in our favour. Once AI develops eno…
ytc_Ugy7cpo-6…
Every AI ever built has an electromagnetic shotgun wired to its forehead. Nobod…
rdc_l5ub8qj
i love this art style ai could never coem close to this level of HAVING EMOTIONS…
ytc_UgzvK4Nvo…
I haven’t used character ai since Prince Sidon told me he was going to marry his…
ytc_UgwiYHzYQ…
Here's a thougth, AI is - ( man not being limited to the 5 or six % of it's ment…
ytc_UgxsYrgYe…
The people advocating for artists to stop making art are unknowingly advocating …
ytc_Ugy-N0pEN…
Comment
Actually no The LLM does not think it doesn't have a process that's going on inside of it that makes it calculate more or less it's predicting machine It predicts based off of the tokens this is why there's a hallucination problem so the answer is no The LLM does not think and no the LLM does not hide its capabilities from humans that's just nonsense and fear-mongering this guy who claims to be the godfather of AI does not know what the heck he's talking about. He's just afraid because their calculator that they designed is just extremely well put together but still has a lot of issues such as hallucinations and more It produces tokens in which it predicts the next word or phrase based off of its training models and this is the black box situation because it does not think it cannot perceive on how it comes to its conclusion It only predicts the next word based off of conversations and dialogues and stories and humanities history that's been fed into its system.
youtube
AI Moral Status
2026-03-01T13:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwLaLx6KytcoFDP3Sh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy9bb7ae041kJuyuyJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0EKCRY-r6j8MgsMZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMXAEb2Dk6YSdnfMN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgztqE8soYHADDpB2Xh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz2FihJK4FeB_-j5ld4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxnvq6PyE6T_jXlegZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzAiWGvCZvjscC0qBt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwR9uUj0yWQcaPpILN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgycB2vSKgIfLjTaBrZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
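The raw response above is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal validation sketch is below; the allowed value sets are only those observed in this batch and the coding-result table above, and the full codebook (not shown here) may define additional labels.

```python
import json

# Label sets observed in the sample batch above; the real codebook
# is assumed to be a superset of these (not shown in this page).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "outrage", "resignation",
                "mixed", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every record is fully coded."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Hypothetical record in the same shape as the batch above.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"outrage"}]')
batch = validate_batch(raw)
print(len(batch))  # → 1
```

Records that fail validation raise immediately, which makes a malformed or partially coded LLM response surface before it is written into the coding table.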