Raw LLM Responses
Inspect the exact model output for any coded comment; look up a record by its comment ID.
Random samples:

- Those who used AI will be the ones who retroactively pay for the losses of the c… (`ytc_Ugx-3udv7…`)
- I'm not here to defend it, but I will say what I use AI creation prompting for (… (`ytc_UgyjGTk2t…`)
- What makes you say that? CT and MRI are extremely complex, I think AI can help p… (`ytr_UgwFxGOD0…`)
- "Hi, I'm designing a creepy robot that will take your job, even your art job, so… (`ytc_UgiPXickZ…`)
- I use to write AI. Blackhat programming is required for what you say. AI is a … (`ytr_Ugxnh_0Ob…`)
- There are people on the frontier of AI technology that believe that AI is the su… (`rdc_ohzpg7w`)
- I'm actually surprised at how well-spoken and intelligent this man is. I was exp… (`ytc_UgyIHf7i4…`)
- Fun story for you, buddy. Companies have invested in AI because of the promises … (`ytr_UgxCxbe5C…`)
Comment
> As a moral realist, I absolutely disagree with ChatGPT : there is a perfect answer. It's just not necessarily as obvious to determine as one may think. From an advanced consequentialist point of view (the only serious approach to moral realism, as a theoretical modelization of ethics, not as a practical guide to decision making), it's not just about counting the deaths, but estimating the impact of every consequences : for instance the mental impact on you of the choice you make must also be taken into account (trauma, empathy lowering that could lead to more harm, etc). And there are many other as soon as you add variables (is one of the men younger, is one of them known by you as evil, etc..).. Ethics is absolute but out of our reach as a practical guidance.. so we rely on empathy (care ethics) and ethical beliefs to decide ;). In the end, ChatGPT's advice is good, even though likely not factual :)
Source: youtube · Posted: 2026-01-03T18:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzh7LvF_7JHf7xojEl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzr1rhc330-HfLLMU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKAw_Y_sZrW0x_UXB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxvVjxe00lWXHUdUWt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwo5UBmfJmh5QdscjN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgytZSLt1EfCzPuiqKl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz8mpNS_q9tVTE5IQh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwLUqpMofCRzx4f5I94AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVf_chJx58VkZO3bN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZ9Qi9rUPjo6zExAh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
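Since the model returns each batch as a JSON array of coded records, the "look up by comment ID" step is a matter of parsing the array and indexing it by the `id` field. A minimal sketch (the `index_by_id` helper is hypothetical, not part of the original pipeline; the two embedded records are copied from the raw response above):

```python
import json

# Two records reproduced from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_Ugzh7LvF_7JHf7xojEl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwVf_chJx58VkZO3bN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
# Fetch the coding result for one comment by its ID.
print(coded["ytc_UgwVf_chJx58VkZO3bN4AaABAg"]["emotion"])  # approval
```

In practice the dashboard presumably also validates that each dimension value falls in its allowed code set (e.g. `emotion` in {approval, outrage, fear, indifference, mixed}), but that step is omitted here.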