Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Philosopher here. All wrongdoing is a product of some combination of ignorance and weakness. Every agent means to do what it thinks is best, so if it does less than what is best, that is because it was mistaken about what was best, or it was not able to make itself do as it thought it should. So if we have highly competent agents as in able to discern truth from falsity, and they also have high degrees of self-control, that automatically gets us agents highly unlikely to do wrong.
A lot of this concern about AI alignment seems to stem from a presumption of moral anti-realism, that there isn’t actually anything that it is correct to intend regardless of whatever one does already intend, in the same way that most of us broadly accept that there are things it is correct to believe regardless of what we do already believe. So we worry about making sure agents’ intentions align with our own, rather than worrying about whether they are competent at assessing what it is correct to intend.
This is folly comparable to trying to make sure it believes the same things we do, rather than that it’s competent at discerning the truth. The “moral truth” is merely whatever is correct to intend, like ordinary descriptive truth is whatever is correct to believe. In both cases we need to foster competence at discerning those things, rather than just agreement with our preexisting conclusions about them.
youtube
AI Moral Status
2025-10-31T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxdXf7QoFmDGGOyNfN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxSjIu2Vl2S4XsDv854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxxZukTmMl-JceLYTx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz9XpETftOZ7TaCXXt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwaW0zpxwYp_RN1up54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyNHO1SiatOYKKW7IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyTolRgYrK8D5WL3bN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwYKo1CIjC9FJ_d8jR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyhnt8LvpTm4dkAqqR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzpvr7yPMYvQ1Pjdyd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"}
]
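Each raw response is a JSON array of per-comment codes across the four dimensions shown in the table above. A minimal sketch of parsing and validating one such response — the allowed value sets below are inferred only from the codes visible in this response and are assumptions, not the full codebook:

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample response above; the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "distributed", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping rows
    that are missing an id or carry an out-of-codebook value."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Usage with a hypothetical one-row response:
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"ai_itself",'
       '"reasoning":"virtue","policy":"unclear","emotion":"indifference"}]')
print(parse_response(raw)["ytc_EXAMPLE"]["reasoning"])  # virtue
```

Validating against explicit value sets is what lets a pipeline surface `unclear` or malformed codes for review instead of silently storing them.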