Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Every time I look for drawing references I filter out all AI because it makes me…" (ytc_UgzE42n3y…)
- "There is a setting to tell ChatGPT to not use your data for improving the models…" (ytc_UgyOj5o0g…)
- "Democrats are not “the left”. Both Democrats and Republicans are center-right wi…" (ytr_Ugzx_iudi…)
- "people are scared of this even though ai is so fucking bad at predicting and gen…" (ytc_Ugw3yvl-v…)
- "To be fair, even as an avid cat person, I believe Waymo has a right to reply, in…" (ytr_Ugzog1Opc…)
- "I mean, one time I asked ChatGPT to recommend 10 songs to listen to based on one…" (ytc_Ugx7lyK-z…)
- "Be actually did try to sue AI before when the MAGA crowd used it to generate an …" (ytr_UgwqtXkea…)
- "Let’s start with news anchors AKA talking Heads. AI take should take over those …" (ytc_Ugz3_ylOm…)
Comment
I believe he's exaggerating to generate headlines. Current large language models (LLMs) lack self-awareness and don't have any concept of being "shut down"—they simply process input based on training. Furthermore, how exactly are they addressing this supposed issue? You can't just "code out" a problem like this. As he himself admitted, neural networks function as black boxes—we don't fully understand their internal workings. You can't surgically remove a memory from an AI any more than you can from a human brain, because the learned representations are deeply interconnected. So what’s the solution—retrain the entire model repeatedly with slightly tweaked data? That sounds questionable, especially to someone like me who works in AI research and development. And honestly, his body language when discussing the so-called "accident" seemed very uneasy.
youtube
AI Moral Status
2025-06-04T14:1…
♥ 33
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzXcbQNkiz1GShRr3F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgywnyGSYCS3aGe8U6t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyeItcd7yBtyM1Y1yx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8oYeog8RPOGIaWcl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBRbXb5QJBt94-OzF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlCxYqFL4SXT_Hyvh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzqDEh6R7VwBY8yIvF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRnPC5llHjWfVdgSZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxyCXN6L85LkUqUA4h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz_jQzq2Hsc4FyqE3l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
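The raw response above is a JSON array of per-comment codings, one object per comment ID, with the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view, assuming the allowed category values are those seen in the samples above (the full codebook may define more):

```python
import json

# Allowed values per dimension -- inferred from the sample codings above,
# NOT an authoritative codebook.
SCHEMA = {
    "responsibility": {"none", "developer", "government", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "resignation"},
}

# A two-record excerpt of the raw LLM response shown above.
RAW = '''[
{"id":"ytc_UgzXcbQNkiz1GShRr3F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_jQzq2Hsc4FyqE3l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

def parse_codings(raw: str) -> dict:
    """Parse a raw response and index codings by comment ID, rejecting any
    record whose dimension values fall outside the assumed schema."""
    by_id = {}
    for rec in json.loads(raw):
        cid = rec.pop("id")
        for dim, value in rec.items():
            if value not in SCHEMA.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        by_id[cid] = rec
    return by_id

codings = parse_codings(RAW)
print(codings["ytc_Ugz_jQzq2Hsc4FyqE3l4AaABAg"]["emotion"])  # resignation
```

Validating against a fixed value set at ingest time catches the common failure mode where the model invents an off-schema label, rather than letting it silently reach the coding table.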