Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_Ugz7Ic8bN…`: "Not against AI, but I think there is still a lot of misunderstanding about the c…"
- `ytc_Ugwty3lpN…`: "Sounds like ChatGPT is bringing back Darwinism for the digital age! P.S. is the…"
- `ytc_Ugz-IZjcG…`: "We gotta wait for I,Robot style Sonny's to start drawing so we start getting per…"
- `ytr_UgxPWTX-N…`: "You wrote: \"the challenge was made to articulate what is different about our wet…\""
- `ytc_UgztbTECI…`: "We dont even have an understanding of what consciousness is so how could we know…"
- `ytc_UgxTxYvZm…`: "User error....my sorority sisters and I rode in one 35 minutes to the mall with …"
- `ytc_UgyHc_GF5…`: "It does show a lot with that chat GPT \"fix\" - tho it's not just a modern problem…"
- `ytc_Ugy3b1L4S…`: "With all due respect, I think that there are real philosophical and analytic pro…"
Comment
What worries me more is that some people actually expected true empathy/moral reasoning from a machine we built ourselves for the sole purpose of completing goals we can't do as quickly that have nothing to do with morality. AI is built with the intent to solve problems at all costs while evolving in order to find more efficient ways to do what it's tasked with doing. In other words, why are we surprised that what we built is doing what we built it to do?? Makes no sense to me.
Platform: youtube · Incident: AI Harm Incident · Posted: 2025-09-10T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxqgrLu3m4pkffO0714AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwmUwyeRNHEqNWgB2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQdtYS4g2XVsbvj8p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgymwylxcP-wFXyU_JZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzZWW-A58ufu-4OlcZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwEjjlx1ymsRzxyAKd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxcemVaD6rwOD_vGaN4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwuXLa9uDnq0G6-aQV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7_mEA1CfdWGyLLpt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyjr8wkl5UvvN7d4LR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
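A minimal sketch of how a raw response like the one above might be parsed into per-comment codings. The allowed category sets are inferred only from the values visible in this dump (the full codebook may define others), and `parse_codings` is a hypothetical helper, not part of the actual pipeline:

```python
import json

# Category sets per coding dimension, inferred from the raw responses
# shown above (assumption: the real codebook may include more values).
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of records) into
    {comment_id: coding}, dropping records with missing IDs or
    out-of-codebook values."""
    valid = {}
    for record in json.loads(raw):
        cid = record.get("id")
        coding = {dim: record.get(dim) for dim in CODEBOOK}
        if cid and all(coding[dim] in CODEBOOK[dim] for dim in CODEBOOK):
            valid[cid] = coding
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_codings(raw)["ytc_example"]["emotion"])  # fear
```

Validating against a fixed codebook catches the common failure mode where the model invents a label outside the scheme; such records can then be flagged for recoding rather than silently stored.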