Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples:

- "@diegoaugusto1561why do you think that? If anything, I think we can expect the …" (`ytr_UgwBPs3_I…`)
- "Fear of death makes human do crazy sometimes illegal things...lets hope AI doesn…" (`ytc_Ugx3-0RxM…`)
- "I know the one who wrote the Manuscript for this to be possible he knows the sol…" (`ytc_UgwRHKxqa…`)
- "Yea so, you develop as an artist, you go to college, you practice for 10 hours a…" (`ytc_Ugxi5tpCS…`)
- "Don't ever give up. These people talking pro-AI shit are probably chatbots. Mo…" (`ytr_Ugw0KrLoN…`)
- "@ItsNikoHimSelf ChatGPT is not a reliable source as it will always give you an a…" (`ytr_UgxDtz2Ju…`)
- "having to explain that working digitally isn't anywhere near the same as generat…" (`ytc_UgyHrRQyM…`)
- "I have wanted a robot from the time I was old enough to read Asimov’s I Robot. …" (`ytc_UgyRIxS0S…`)
Comment

> Not sure having AI ingest forums is a good idea. If you look at humanity as a whole, the internet as used by humanity is more as a tool, like search engines, shopping etc. The forums within lies many subcultures which are not really a good representation of fact. Does the AI even know what its ingesting? Like people's opinions? Which leads to the question, why feed AI only a subset of human's opinions? Why feed them at all?

youtube · AI Moral Status · 2025-12-14T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw423sbNBGudEfDrAt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzQNGXEZ3QcbYuBFlN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyY6pEi_PT6_Jc5jxJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz3FM6NDchXLm6H6Gh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyBWE0ProEolseBzXB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwkwFMC7KHvXWYrgcp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7UoxDphSpBCwVCnV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyUoj9pkF_OmSIm6YF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwoNY3LZCNghG5n6PF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy6VdnRudy-RLQSt2V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
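The raw response is a JSON array with one coding record per comment ID. The lookup this page performs can be sketched in Python by indexing the array by `id`; the parsing below is an assumption about the tool's internals, not its actual implementation, and uses two records abridged from the response above:

```python
import json

# Hypothetical sketch: parse a raw LLM coding response (a JSON array of
# per-comment records) and index it by comment ID for fast lookup.
raw_response = """[
  {"id": "ytc_Ugw423sbNBGudEfDrAt4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyBWE0ProEolseBzXB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# Build an id -> record mapping, mirroring the "look up by comment ID" view.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

# Retrieve the coded dimensions for one comment.
code = codes_by_id["ytc_UgyBWE0ProEolseBzXB4AaABAg"]
print(code["policy"], code["emotion"])  # → ban outrage
```

If the model ever returns duplicate IDs or extra prose around the JSON, a dict comprehension like this silently keeps the last record and `json.loads` raises, so a production version would want validation around both steps.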