Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- "the real problem is ya don't understand or control narrow ai any better I mean w…" (ytc_UgxIRZPpJ…)
- "The whole purpose of art is the feeling and the process that captures it. That's…" (ytc_UgzTWe1Ff…)
- "seeing ai art really pisses me off because the effort and time put into that shi…" (ytc_UgzGV4Vdb…)
- "Thing is it’s Not AI. It’s the people at the top using it in such way. The blame…" (ytc_UgyulN58O…)
- "a really good idea is to add ai distortion over speed paints so then ai artists …" (ytc_UgyiYhAg1…)
- "Ai uses Rag.. \"Retrieval-Augmented Generation\" on a huge indext of trained data …" (ytc_Ugzx-j_6P…)
- "Why do you think AI will have constant growth? The era of training models on rea…" (rdc_n3lgjbt)
- "Soon enough AI art is gonna become perfect, it's already developed so much in th…" (ytr_UgyJ6J6cV…)
Comment
ChatGPT was consistent throughout. You might accuse a human's refusal to pull the lever as a choice, but it would be an individual thing; some humans may deliberately not act because they 'don't want to get involved' and some might be psychologically incapable of doing anything (frozen). However, ChatGPT *must* follow it's programming since it is not sentient. Therefore, it's non-action is not a choice at all - even if it has serious consequences. The moral choice would be the human/s who put it in charge of switching the lever without adequate instructions for the task.
Source: youtube · 2025-10-27T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwkNdWV0_KsRXHwRS54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwtYyY4jPpGiaI1HZB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx0Dvla7O7S-GH_RWJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgytyGxn7vRJ3OTcmfN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwWHtOHmb-vM77eOkF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-7zdwhGMDXwiS75x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwHEHoGwvUGhkP-gox4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz_NSWmET6YAq3WVdd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzqC6Bx7NMRa45wgNJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx56pf03FabIuxgtLF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
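The raw response is a JSON array with one object per coded comment, keyed by `id` and carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch response can be indexed for per-comment lookup (the two rows are copied from the response above; the variable names are hypothetical, not part of the tool):

```python
import json

# Raw batch response from the model: a JSON array of coded comments,
# shortened here to two entries copied from the response above.
raw_response = '''[
  {"id":"ytc_UgwkNdWV0_KsRXHwRS54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwHEHoGwvUGhkP-gox4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]'''

# Index the coded rows by comment ID so a single comment's dimensions
# can be retrieved directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_UgwHEHoGwvUGhkP-gox4AaABAg"]
print(code["responsibility"], code["reasoning"])  # developer consequentialist
```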