Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "And then there are people saying that promting AI pictures makes them artists. O…" (ytc_Ugw_kWAd5…)
- "Anyone waiting with baited breath for AI to be a 'wonderful' part of our society…" (ytc_UgyY6I6MO…)
- "Omg the male robot, please tell me he was made to be kidding!cause he’s freaking…" (ytc_UgzQ_n2iI…)
- "proof that AI has ever “killed for the first time.” No verified news, official d…" (ytc_Ugz7zLqZD…)
- "I never really considered AI art art. Just trash actually. I don’t upload my art…" (ytc_UgyPQILhj…)
- "Why does your model have teeth in one frame but 'demon' teeth in another frame? …" (ytc_UgxKQj84k…)
- "The part about roleplay absorbing hate comments is interesting. On DarLink AI, I…" (ytc_UgyeBbhej…)
- "The first professor is entirely correct. The goal of an English class is to teac…" (ytc_UgwqhFbLP…)
Comment
@austinlong6936 MIRI is Yudkowsky’s think tank, so while I don’t know of Soares as well, I know the two are not, like, collaborators from different perspectives so I feel safe focusing on the more famous Yudkowsky.
Yudkowsky is a major influencer in the sort of culty Silicon Valley tech bro sci-fi weird movement. He’s the guy who developed Rationalism, where he says he can teach you how to think in this super logical fashion. Rationalism and Yudkowsky’s blog/forum are also where we get a lot of the Effective Altruist stuff, too, since you can min-max good.
According to Yudkowsky, who believes that a super intelligent AI is inevitable, a super AI would be most capable of analyzing good/evil, thus to maximize good means doing whatever you can to bring about this godlike super AI.
For example, rather than giving money to charity, it may be better to spend that money printing out Yudkowsky’s Harry Potter fanfic in order to introduce more people to his idea of Rationalism (which they can fortunately read half a million words about by reading through a series of his writings called The Sequences). That’s because then there will be more people who can discover how to think Rationally and who will dedicate their time to maximizing good by developing that super AI.
youtube
AI Moral Status
2025-11-02T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgyGDWc0iNMHE29Y-qp4AaABAg.AOwyq6rRRRAAQNPX-K-r0z","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwkGd_OWtC-51LYHUl4AaABAg.AOwxCjNfKfPAOxVrEsOyAl","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzKrVVcaRxCW5jxgoB4AaABAg.AOwvnaY-7qdAOy-6FotgBA","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugyw6fhE_puxgOT9Otd4AaABAg.AOwuITYQwS_AOwwsh6wtCz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugyw6fhE_puxgOT9Otd4AaABAg.AOwuITYQwS_AOwyaDtuxuM","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugyw6fhE_puxgOT9Otd4AaABAg.AOwuITYQwS_AOx-U9KU2BJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyW6pJd9Hs3u6CotBV4AaABAg.AOws_FUM07hAOwv2bQl-dV","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyW6pJd9Hs3u6CotBV4AaABAg.AOws_FUM07hAOxPu9ceH9d","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw97TiE8gqJdjSAQql4AaABAg.AOwruDBcQOBAP1AUK6AqgY","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxRGLCxCRK_44Hs5PV4AaABAg.AOwqrIgoeI_AOx9PjIZzky","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
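
The per-comment values shown in the Coding Result table above are pulled from a raw JSON array like this one. A minimal sketch of how such a response could be validated and tallied, assuming the value sets observed in this sample are the allowed codebook values (the real codebook may define more; the `validate_codes` helper and the shortened sample IDs below are hypothetical, for illustration only):

```python
import json
from collections import Counter

# Allowed values per coding dimension. These are inferred from the sample
# response above; the actual codebook may permit additional values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "fear"},
}

def validate_codes(raw_json: str) -> Counter:
    """Parse a raw LLM coding response and tally values per dimension.

    Raises ValueError if any record carries a value outside ALLOWED,
    so malformed model output is caught before it reaches the database.
    """
    records = json.loads(raw_json)
    tally = Counter()
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
            tally[(dim, value)] += 1
    return tally

# Two records in the same shape as the raw response, with hypothetical IDs.
sample = """[
 {"id": "a", "responsibility": "company", "reasoning": "consequentialist",
  "policy": "none", "emotion": "indifference"},
 {"id": "b", "responsibility": "ai_itself", "reasoning": "deontological",
  "policy": "ban", "emotion": "outrage"}
]"""
print(validate_codes(sample)[("policy", "ban")])  # → 1
```

Validating before tallying means a single hallucinated label fails loudly with the offending comment ID rather than silently skewing the counts.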