Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Let’s be real, one doesn’t even need a proof to tell that Anna in the library im…
ytc_UgxPQ2hI4…
How can it ever be ethical to develop AI that has even just a chance of becoming…
ytc_UgyKPYmWm…
So, when there is a price barrier to accessing mental health care I can't in goo…
ytc_Ugy49P0s0…
AI will continue to progress; the pause will happen over time, as with many…
ytc_UgyOetLPe…
AI music unfortunately is just as bad. It wasn’t as prominent at the time this v…
ytc_UgwtWiqht…
No plumber or electrician is getting UBI or AI to physically help them. Time for…
ytc_UgxCZU-EJ…
Lol, so when an AI program uses information from publicly posted art to evoke ce…
ytc_UgwlLCYPI…
Digital art still has a human mind and hand behind it. Medium does not change th…
ytr_Ugxo0gF-4…
Comment
It’s not that we don’t have the words to efficiently/effectively talk about LLMs as though they aren’t sentient - it’s that a huge number of people who talk about LLMs have decided to call them “AI” and talk about them as though they’re sentient, which contaminates the way we think.
Also, this idea that the precise language we use doesn’t matter is total BS, because it’s causing uninformed people to believe that ChatGPT is presently sentient and all-knowing. And that makes them more likely to rely on it as such. And I think that’s bad.
youtube
AI Moral Status
2025-10-31T16:3…
♥ 455
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw2x0sErqnTEBCSJZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy-eDQc-LnP66KrhfZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzByHsIC0Ly09nEiBx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx5tikRL4eR8Xsl6Z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzhJipb1hcM9z79LoV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy9NPfWs1XgLcMeNm94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzInCW4859HZVBJ3bt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzRmbdzCg0fy4umJTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxeRE8t-gKr81KpBE94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw7BPzdIpFM2_wq-ZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
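To look a comment up by its ID, the raw response above can be parsed and indexed on the `id` field. The sketch below is a minimal illustration, not the tool's actual implementation: the helper name `index_by_id` and the inlined two-row sample are assumptions, with the rows copied verbatim from the raw response above.

```python
import json

# A small excerpt of the raw LLM response shown above (two rows only).
raw = """
[
  {"id": "ytc_Ugw2x0sErqnTEBCSJZB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzInCW4859HZVBJ3bt4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
"""

# The four coding dimensions from the "Coding Result" table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse a batch coding response and index rows by comment ID,
    skipping any row that is missing an expected dimension."""
    rows = json.loads(raw_json)
    return {
        row["id"]: {k: row[k] for k in DIMENSIONS}
        for row in rows
        if DIMENSIONS <= row.keys()
    }

codes = index_by_id(raw)
print(codes["ytc_UgzInCW4859HZVBJ3bt4AaABAg"]["emotion"])  # mixed
```

Filtering on `DIMENSIONS <= row.keys()` guards against partially formed rows, which LLM batch outputs occasionally contain.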