Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- This is the USA wants all the fossil fuel. They want it for AI data centres. We … (ytc_UgxCHl7tS…)
- This is all still decption. The u.s.a. holds on to fifty million anthrax bombs t… (ytc_Ugy8zUnE5…)
- As an artist (in a different field), I find it amusing when people hear or see s… (ytc_UgxSl8ElQ…)
- As an artist I agree with you. Most people don't understand how screwed my child… (ytc_UgylRkzi_…)
- Do you not know what the dot com bubble was?
  At no point did I say these “AI’s… (rdc_lgnvck0)
- Actually, thinking & writing are an action, & predicting the next word in a sequ… (ytr_UgyLwejLJ…)
- I watched a video with someone using the ghibli ai mode so I commented "everytim… (ytc_UgwGEidbg…)
- Other countries are doing it, too, e.g., Deep Seek comes from China and Le Chat … (ytr_UgzD9dMhL…)
Comment
I see lots of people making arguments on why LLMs won't lead to superhuman intelligence, and I think this is slightly misplaced. I remember the time before LLMs, when deep neural networks were all the rage and people were making arguments about how neural networks can't lead to superhuman intelligence. And the formulation of the AI alignment problem itself predate the neural networks boom too. So what I'm trying to say is when the AI alignment people speak about AI, they don't mean this specific architecture that's popular right now, but rather fundamentally any attempt of building AI of any architecture. So I think arguments about how ChatGPT won't ever become superintelligent are not enough to dismiss this all.
P.S. I do know that transformers is a NN architecture.
youtube · AI Moral Status · 2025-11-01T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwqDZPwS0sJhzustSl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwfVIgjc9RUVbtK2Yx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBkZ0RB2dzvKO0Wc54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyRx8kIRspv6bsRE4J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxDgzcIUZXZuAzgHSR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzYdePcFg5OXhfaaV14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8uz5IUjT5JCw33wF4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy3sz23nrUfIxdlCFJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwnojViKzl0G8CMj794AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyH4hSorqWq8zxU7AN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
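The lookup-by-comment-ID workflow above can be sketched in Python: parse one raw LLM response (a JSON array of coding objects) and index it by `id`. This is a minimal sketch, not the project's actual code; the `DIMENSIONS` value sets are assumptions inferred only from the codings shown on this page, so they should be replaced with the real codebook.

```python
import json

# Allowed values per dimension, inferred from the samples above.
# ASSUMPTION: the real codebook may define more categories.
DIMENSIONS = {
    "responsibility": {"none", "government", "company", "developer", "ai_itself"},
    "reasoning": {"none", "mixed", "consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"none", "mixed", "indifference", "approval", "outrage",
                "fear", "resignation"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response and build an id -> coding lookup,
    rejecting any value outside the assumed codebook."""
    index = {}
    for row in json.loads(raw_response):
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        index[row["id"]] = row
    return index

# Usage with the first coding from the raw response above:
raw = '''[
  {"id": "ytc_UgwqDZPwS0sJhzustSl4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]'''
index = index_codings(raw)
print(index["ytc_UgwqDZPwS0sJhzustSl4AaABAg"]["emotion"])  # approval
```

Validating every row before indexing catches malformed or off-codebook model output at ingest time rather than at analysis time.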