Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
| Comment excerpt | Comment ID |
|---|---|
| I was a making a little QT project for a class on the raspberry pi. I was trying… | rdc_mjuidnu |
| The AI is not "getting inspired" by others’ work like people do, it takes existi… | ytc_UgzTNe-5T… |
| This one was good. You scared me. I gave it a thumbs up although I’d like to … | ytc_Ugxa7Dwt8… |
| I wonder what’s the benefit of not hiring Juniors. At some point you will run ou… | ytc_UgxXjyjBm… |
| So the future will be a hierarchy of 1 - the most trained/ talented/expertised … | ytc_Ugz53zaCW… |
| This is bad fan fiction. The AI 2027 paper should not be read, let alone taken s… | ytc_UgyPgzrKD… |
| We brought on ourselves so we might as well enjoy what little time graciously le… | ytc_UgysQMyPR… |
| Robot plumbers in 2030. He's bonkers. He talks without any facts. Diary of CEO k… | ytc_Ugx-YRSaU… |
Comment
Anyone who believes that an AI can be conscious has ZERO understanding of what the technology is; and has been watching too many Science Fiction movies.
AI responses are exactly what they've been programmed to do. While they might build logical connections, they don't "learn" in the conventional sense of human learning.
If an AI starts screaming and ranting like Greta Thunberg, then that's what it was programmed to do.
Platform: youtube · Video: AI Moral Status · Posted: 2025-06-25T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
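The coding result assigns one value per dimension. A minimal sketch of how such a record could be validated, using only the value sets observed in this page's raw responses (not necessarily the full codebook; `check_record` is a hypothetical helper):

```python
# Value sets observed in this dashboard's raw LLM responses.
# These are only the values seen here, not an exhaustive codebook.
OBSERVED_VALUES = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "resignation"},
}

def check_record(record: dict) -> list:
    """Return the dimensions whose value was not previously observed."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if record.get(dim) not in allowed]

coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "none", "emotion": "outrage"}
print(check_record(coded))  # → []  (every dimension holds a known value)
```

An unseen value (say, a new `responsibility` category) would surface in the returned list rather than silently passing through.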
Raw LLM Response
```json
[
{"id":"ytc_UgzBNEifU131XKqyp-14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwsWpXQbztdiRYxw4x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyppLZvmZknLW2LYrB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwC1OcT6a0TAl2vBMd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCsSiryq0BMAKOBKp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyVmZhvalkpG3lhGwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzbvwmPNGxgFmpfpy14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy5oRJs7W_1svhnCdx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwCg1YXbXqEN8-PUMJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyB7V41r7o9axbEL2V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
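The raw response is a JSON array of coded comments, so looking up a code by comment ID reduces to parsing the array and indexing it by `id`. A minimal sketch (the two records below are copied from the response above; the variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of coded comments, two records shown here.
raw = '''[
 {"id":"ytc_UgzBNEifU131XKqyp-14AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyppLZvmZknLW2LYrB4AaABAg","responsibility":"developer",
  "reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

# Index the coded records by comment ID for direct lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

rec = by_id["ytc_UgyppLZvmZknLW2LYrB4AaABAg"]
print(rec["emotion"])  # outrage
```

This is the same id-to-record mapping the "look up by comment ID" view relies on: one parse, then constant-time lookups.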