## Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The worst thing is that I've seen the exact opposite preferences for ai(chatgpt)…" (`ytc_UgwOPhrc3…`)
- "Well they were complaining about work conditions so they bought a robot that doe…" (`ytc_UgzbHVX3b…`)
- "You can’t be incredibly kind and not care at the same time. It is either that th…" (`ytr_Ugxuy1qR8…`)
- "I checked it and basic GPT-3.5 model is incorrect, indeed. However, GPT-4 correc…" (`ytr_UgzDP5cMI…`)
- "Yessss. ChatGPT has been awesome as a tool to help flesh out worlds and stories …" (`rdc_j8dk59a`)
- "And what's crazy is that AI would be obsolete if we refuse to use it but we wont…" (`ytc_UgwFQ_hhw…`)
- "As for AI needing humans to oversee and it getting outsourced to cheap countries…" (`ytc_Ugx00BNZo…`)
- "I don't think it's AI, just some wierd video editing, short clips of jumping pla…" (`ytr_UgwVtvDCp…`)
### Comment

> I asked chatgpt to answer a question based on an abstract hypothetical. It just spewed out the usual, pro forma, woke, middle-of-the-road claptrap. As far as i can see it's a text-to-speech wikipedia, and nothing more.

Source: youtube · Video: AI Moral Status · Posted: 2025-09-02T19:3…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response

```json
[{"id":"ytc_UgzWzJjP737zbCD_gYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwxaQ_aoHdSB09uQZN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxBj6sDWZTqKasxg1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwq-gSKt4oB9Am5nd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugws2AYYzYu-k5UgKBh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzh0d5jRFyBLlSUW294AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwfi0y9nCJCf75bMgN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwFQiFtvYno0j1Gk2d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzuG-wCRYI5GsmJblZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzqI36W9zK4Q6hl7fl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}]
```
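The raw response is a JSON array with one record per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) keyed by comment ID. A minimal sketch of how the "Look up by comment ID" view could index such a response — the helper name `index_by_comment_id` is hypothetical, and the two records below are taken from the response above purely for illustration:

```python
import json

# Two records copied from the raw response above, for demonstration.
raw_response = '''[
  {"id": "ytc_UgxBj6sDWZTqKasxg1F4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzqI36W9zK4Q6hl7fl4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "fear"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM coding response and key each record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgxBj6sDWZTqKasxg1F4AaABAg"]["emotion"])  # → outrage
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters if the tool resolves many comment IDs against the same batch response.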