Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Algorithmic bias huh? Why are *you* specifically depicting Asians having squinty…" (ytc_Ugwwa-Pb-…)
- "The companies that are running these A.I.s claim that they are not producing cop…" (ytc_UgyUd1eLg…)
- "the one and only reason I’ll ever use AI art is as a placeholder while I train m…" (ytc_UgzwijzQw…)
- "😮 you have no imagination😮 literally blade runner 2049😮 we're going to make repl…" (ytr_UgznRdQ1o…)
- "There is a job that ai cannot do love and care for a child and family…" (ytc_Ugz-syM2N…)
- "The robots are dressed better than the speaker. This is sign not to take robot t…" (ytc_UgwE65gqj…)
- "All these arguments ignore how much AI is hated and how much it is not likely th…" (ytc_UgxXu0jk1…)
- ">I don’t know that it’s fair to say that humans are only smart “relatively.” …" (rdc_j5y84fu)
Comment
*Automated - not autonomous. Autonomy, in actual sense, requires decision-making. For now, no mechanical thing can truly do it. Flying a fighter jet in combat is much harder than driving a car. A toaster is automated - it heats up, the thermalswitch releases the spring and the bread pops out - yet, it would be quite of a leap to call it "autonomous", wouldn't it? Simple automation or complex one on an industrial scale is still just automation - not decision-making.
Source: youtube
Posted: 2012-11-24T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwiBO59xpLPCkecBqd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_uEjQogI-wb-bmPB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7nuMjJl6i0N6vTmZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugys3yBMDKkxZIDT7cd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxNKwh_r49bvXQyVH94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxH2eMZsO_x_AjYfy94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxsRrYrWDDcgzPrJQN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwcPYYDaK_--Y12hiN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxXAQENEamBTwM_Aht4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugykgw-hR28QCN8z1ep4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
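The raw response above is a JSON array of per-comment records, so looking up a coding by comment ID amounts to parsing the array and indexing it by the `id` field. Below is a minimal sketch of that lookup; the two abbreviated records are sample data in the same shape as the response above, not real pipeline output.

```python
import json

# Sample payload in the same shape as the raw LLM response above
# (two records kept for brevity; the IDs are taken from that response).
raw_response = """
[
  {"id": "ytc_UgwiBO59xpLPCkecBqd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugys3yBMDKkxZIDT7cd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]
"""

# Index the records by comment ID for O(1) lookup.
codings = {record["id"]: record for record in json.loads(raw_response)}

# Look up the coding for one comment ID.
coding = codings["ytc_Ugys3yBMDKkxZIDT7cd4AaABAg"]
print(coding["emotion"])  # approval
```

In practice the dimension values shown in the Coding Result table (Responsibility, Reasoning, Policy, Emotion) come from exactly these JSON fields, so the table is a rendering of one record from the array.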