Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by ID; the samples below were drawn at random.
- `ytc_UgxHaJfXy…`: "Generative AI plagiarizing studio Glibly art-style becomes especially egregious,…"
- `ytc_Ugxo9JXXq…`: "Why would anyone want a Tesla in this day and age? I don't want a single dollar …"
- `ytc_Ugz3NTwR6…`: "Its robot becuase when she shows the first one.not the walking one. It has a a s…"
- `ytc_UgzglSfGw…`: "I feel like AI and robots should have been created to take the tasks off our sho…"
- `ytc_UgyUAinoX…`: "I believe it depends on the purpose of which we build an AI with. Do we want to …"
- `ytc_UgzmpHaW2…`: "I think the bottom line is that AI is not a tool that should be available to any…"
- `ytc_Ugy0o714-…`: "In my view, the ends justify the means. Artists are such a bottleneck to busines…"
- `ytc_Ugy3aDaIr…`: "this is what happens when men have a conversation about the future, given the ex…"
Selected comment (source: youtube, video "AI Moral Status", posted 2025-10-31T09:0…):

> When you went into the reasons for AI hallucination ("reproducing humans never replying 'I don't know'"), it reminded me of one idea I can't get out of my mind. Since LLMs are trained on human text, they reproduce systemic human behavior (Dunning-Kruger, backfire effect, yada yada), so in a sense humanity is performing a mirror test on itself.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwGCenfic0DffQynGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMwUrLPPKGZc7N7gZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxYTqk0c1AMEO-Cn0R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugzvezki_UIzKiot7-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwqT_qp2eypDr9Kwf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYZAS6C1uYHlECl894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyJjyR6omrJ_AWUSwR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwTHp--dd6C17hBoY14AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyv3k5O2BLJBDFPWJN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz1YYOzpzTlkFa9XrV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
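The raw response is a JSON array of per-comment codings keyed by `id`, one object per comment with the four dimensions shown in the table above. A minimal sketch of parsing and sanity-checking such a batch before storing it — note the allowed value sets below are inferred from this one sample response, not from a published codebook, so they are assumptions:

```python
import json

# Value sets observed in the sample response above; the actual codebook
# may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "disapproval", "outrage", "mixed"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid codings by comment ID.

    Rows with missing or unknown dimension values are skipped rather than
    coerced, so malformed model output never reaches the dataset.
    """
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-row batch for illustration.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"virtue",'
       '"policy":"ban","emotion":"outrage"}]')
print(validate_codings(raw)["ytc_x"]["policy"])  # ban
```

Skipping invalid rows (rather than raising) lets a re-coding pass target only the comment IDs absent from the returned dict.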