Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The first flaw in the series of logic is when Alex asked "do you *think* that if somebody says something they know to be false (...)"

ChatGPT isn't capable of "thought", any more than the predictive text on your old Nokia is capable of thought.

Secondly, "did you just a moment ago say something you *knew* not to be true..."

It isn't capable of "knowing" anything at all in the traditional sense. Does my CD copy of Coldplay's "Parachutes" actually "know" what it contains? Of course not, because knowing something requires both cognition and memory recall, and ChatGPT has data-storage recall but again no cognition.
Source: youtube · AI Moral Status · 2025-03-12T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxZ_ueLYOUaaSLnLDd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyFYW1sLUtdLvjCOjl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugydd-iw7tXAP-kdt8F4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx4B0MW9ZHbf7cWLll4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxRktD0CueUhw3WhMB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwQOcMKeWV0bIu36Q14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugws1ehdaLN1lhygl1R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzcNmjutdXkOOo8cm94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy4YNwId_GlRbYKPV54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyZ8X13BHsy0bhoy8Z4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
```
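The raw response is a flat JSON array keyed by comment ID, which is what makes per-comment lookup possible. A minimal sketch of that lookup in Python, using the first two records from the response above (variable names are illustrative, not part of the tool):

```python
import json

# Parse a raw LLM coding response and index it by comment ID.
# The JSON shape mirrors the raw response shown above.
raw_response = """[
  {"id": "ytc_UgxZ_ueLYOUaaSLnLDd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyFYW1sLUtdLvjCOjl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""

records = json.loads(raw_response)

# Build a comment-ID -> coding dict, so any coded comment can be inspected directly.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgxZ_ueLYOUaaSLnLDd4AaABAg"]
print(coding["responsibility"])  # ai_itself
```

Indexing by ID keeps the raw model output verbatim while still allowing the coded dimensions for any single comment to be pulled up instantly.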