Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This was just…. Fascinurbing: Both fascinating and disturbing… The way you expos…
ytc_UgzddMQA4…
Seeing the ai art made me want to do a trust fall down the stairs…
ytc_Ugy0vldtz…
I’m so pissed that it takes a mega corporation to complain about the plagiarism …
ytc_UgzdW8fZQ…
His understanding of how code is generated by AI is not even close to being corr…
ytc_UgzM6E522…
Ai companys: "Yeah, Ai is dangerous if we rush it and are not carefull."
Also t…
ytc_UgwrCxgzI…
First of all chatgpt is a conversation tool with lot of trained dataset and can'…
ytc_UgzgnwqvY…
Hey there, Siddhii! It's great to have you here. If you're looking for a nicknam…
ytr_Ugyy-_DBz…
When makeup has become so normalized that people think this not looks human beca…
ytc_UgwHvi_WP…
Comment
...if it thinks like a duck... well, and this is where it becomes most uncomfortable, coz at some point there is no difference between you, and the AI. And if you add the Buddihist's conviction that the Self is but an illusion (see Sam Harris) and we are but a (very complicated) series of experiences and re-actions, then the border disappates altogether.
So the question might go quite the opposite direction, rather than "is AI any worse" to "are we any better"?
youtube
AI Moral Status
2023-08-21T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugyw50kPMI4YgscOJ_l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-nl07SxmJIyZI35t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyh0PFTej62WD8lyw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgztYkyW4Uh0z_kLyNN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXT6e9HG6TfcPdEp54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOGLxhwuP0Ig9o8nl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz5aN6AmK1JMv55cat4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyLSSHsXlgN8nsniAN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9gDx75wKssNpdpw94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyhU3zbx1Vch_hq8rd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}]
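The batch response is a JSON array of per-comment codes keyed by comment ID, one object per coded comment, with the four dimensions shown in the result card (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response and indexing it for the comment-ID lookup, assuming the response parses as a JSON array; the allowed value sets below are inferred from the visible samples and may be incomplete relative to the full codebook:

```python
import json

# Allowed values per coding dimension, inferred from the sample
# responses above — the actual codebook may define additional codes.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "unclear"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"approval", "mixed", "fear", "indifference", "unclear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID.

    Raises ValueError on a record with an unknown code, and lets
    json.JSONDecodeError propagate if the array is malformed.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {d: rec[d] for d in ALLOWED}
    return coded

# First record from the response above, used as a one-element sample.
raw = ('[{"id":"ytc_Ugyw50kPMI4YgscOJ_l4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}]')
coded = parse_llm_response(raw)
print(coded["ytc_Ugyw50kPMI4YgscOJ_l4AaABAg"]["emotion"])  # approval
```

Validating every record against the allowed sets before indexing means a model that drifts from the schema fails loudly at parse time rather than silently producing "unclear"-looking rows in the result card.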