Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Imagine watch an e-sport event
But there's only bots "playing" against each othe…
ytr_UgwVIgwBH…
For AI.i screamed multiple times on X at Elon Musk that they must not give them …
ytc_UgzgP4j0N…
Why would the entire workforce not take to the streets and burn down big tech cr…
ytc_UgwwLGBPn…
What country today is using ai for tracking and killing people? The US. If you…
ytc_Ugzl1Uwpu…
“To avoid any Negative CONSEQUENCES in the future” sounds like every other Villa…
ytc_UgwGxjx-P…
Chinese already have a strong presence in Africa. Their only overseas naval base…
rdc_et77m9q
@dangdudedan8756No, their mom is right. Art is used as an expression of human e…
ytr_Ugx62oDdm…
People simply dont understand that AI cant make decision, they see AI generated …
rdc_n7mcvfv
Comment
Also finally everyone ought remember his title refers to something structurally impossible: it is a static model that you cannot convince any more than Eliza chat.
So it is in a sense, only possibly motivated by a desire to "stir the pot" and elicit emotional outrage, while he is fully aware that ChatGPT, as a static model, cannot be "convinced" of anything.
Isn't that true, Alex?
Doesn't that make your content more harmful than whatever ChatGPT fails (or doesn't; it's not materially relevant here) to admit, since you know a static model cannot be "convinced" of anything, Alex?
youtube
AI Moral Status
2024-08-25T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyGUbR9NKdiYqpAAGN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy-1AqHPQsJRhBXrVd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxd-wUTBLI8SLlqSyZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzTzPPvLmoGAvfCVS94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzNVE4dn5G4aBtUDqh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzsBakThjnjTZ_-5yh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxFwbiI9XpBZWoRM1d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz0xziGoI77ayujJCt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9t9p8tiZtbV66xYd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyywR_yQ-IZXVVY_QR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
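A raw response like the one above can be checked programmatically before the codes are stored. The sketch below parses the JSON and validates each record against the coding dimensions shown in the table; the allowed value sets are inferred only from the values observed in this sample, not from a documented schema, so they are assumptions.

```python
import json

# Allowed values per coding dimension, inferred from the values seen
# in this sample output (an assumption, not a definitive schema).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        # Every dimension (plus an id) must be present.
        missing = [k for k in SCHEMA if k not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        # Every coded value must come from the observed value set.
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
print(len(validate_codes(raw)))  # → 1
```

Because the value sets only cover codes that appear in this one response, a real validator would widen them from the full codebook; records with novel but legitimate values (e.g. a non-"unclear" policy code) would otherwise be rejected.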