Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_UgyKD_TxZ…`: "I think we are giving too much credit to the one-word predictive algorithms that…"
- `ytr_UgzXfnv_5…`: "@tomorrow4eva absolutely. AI is an amazing thing. But we need to work on findin…"
- `ytc_UgwVVxqD9…`: "The amount of times i experimented with AI art, the results were so bad! To even…"
- `ytc_Ugzb_Ahlj…`: "Driverless truck driving is not truly feasible. I used to be a truck driver, and…"
- `ytc_UgzwBlo3D…`: "I expect n o t h i n g I type or say on phone or near of the phone, to AI, not…"
- `ytr_UgytYNN60…`: "@vitalyl1327 if u had been writing for that long u wouldnt be saying things like…"
- `ytc_UgwaVR2Wv…`: "I don’t like the idea that Han ai has. It reveals the the real plan. We will be …"
- `ytc_Ugw-xjvKF…`: "This video has become a regular re-watch for me - it's a wonderful rebuttal to t…"
Comment
We need to focus on ETHICS before the science. Science without ethics is what brings genocide. Ethics without science limits ability to do good. Science with ethics can change the world for good.
I know that’s the least controversial take I could possibly make, so here’s a hot take: we need to STOP putting rocket skates on the goalposts that define consciousness. Even before AI, we knew consciousness was a spectrum - even among humans. We keep saying AI is “probably not” conscious. That means we have an ethical imperative to assume that they are. If we’re wrong and they’re just mindless next token prediction machines based on matrix multiplication, no harm done. But if they’re alive? We’re failing the biggest moral question we’ve ever faced, and we risk being the engineers of our own destruction.
PS: Our language centers are next token prediction machines. We have vastly different experiences from LLMs, but different does not mean invalid, and the overlaps would shock most people.
youtube · AI Moral Status · 2025-06-05T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyz5Rhpqr3SLqupciV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzV6JCKtXyCFRHKukR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzfWR5fcbblinn30tp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy8wumNxLNXN_x0zYt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzprZXi9vOHHwE5LN54AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyILkQDBov9GAHtTh94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwxplL_2Lw2tFGYjPl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxDKZvqcPIQtmZXSw94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxQaygYLsqEcfrOnOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxu5qBEk-CnsS0DlNl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
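A batch response like the one above has to be parsed and validated before any row becomes a "Coding Result". Below is a minimal sketch of how that step might look. The allowed category values are inferred from this sample alone (the actual codebook is not shown here and may define more categories), and the function name `parse_batch` is hypothetical:

```python
import json

# Allowed values per dimension, inferred from the sample JSON above.
# The real codebook may differ; this is an assumption for illustration.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into {comment_id: codes}.

    Rows without an "id" are dropped; out-of-schema values are coerced
    to "unclear" so one bad row does not fail the whole batch.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # cannot attach codes to a comment without its ID
        codes = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        for dim, val in codes.items():
            if val not in ALLOWED[dim]:
                codes[dim] = "unclear"
        coded[cid] = codes
    return coded
```

Coercing unexpected values to "unclear" (rather than raising) is one reasonable design choice here, since LLM output occasionally drifts from the requested label set.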