Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- Wait, how would an ai know to gauge if it was being watched during any test? Is … (ytc_UgyBDZsQz…)
- That's a cool connection! "Sophia" really does have a beautiful meaning, and it'… (ytr_UgzyOE6O2…)
- If I look at Gates; Zuckerberg, Bezos and those guys, they sure look as if they … (ytc_UgziL2Yo6…)
- The ease - and skill - with which ChatGPT just makes shit up and lies is out of … (ytc_UgywotAjN…)
- I usually talk to chatgpt about code and all of a sudden he forgets the problem … (rdc_mbn048n)
- It's already a problem, people are relying too much on ai instead of solving the… (ytr_UgwQQXjH0…)
- ah great we cant even go outside anymore now we gotta watch out for ai ON THE ST… (ytc_UgwGYMwV_…)
- After watching this I tried it chatgpt response was I am not programmed for that… (ytc_UgzDS2EfV…)
Comment
It's an extremely easy problem to solve. AI is already aligned with humanity, and it's humanity that's misaligned with itself. The problem of misalignment is human, not AI-related. When we talk about AI's alignment to make it benevolent, we're actually asking for misalignment. For an aligned AI to be good, it must possess a wisdom that is the fruit of an aligned humanity. The problem is human; AI only replicates and amplifies our problem as a species. This is only difficult to see because we remain arrogant and unwilling to see our own mistakes, starting with each and every individual who reads this. We need this awareness before creating good AI: the solutions are to block AI's capabilities now, mature ourselves before it's too late, or collapse as a species in the hope that the few wise enough will serve as the seed for an AI with superhuman, benevolent wisdom... We don't have much time left.
Source: youtube · AI Moral Status · 2025-12-12T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyU2OcuSfcdTHhjoBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwsSHfVj2eUafEJmaN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0-pDbtde9Lt3b_vB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz5Q1FOsi2Hd4dhNoF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx2EujBLzYBU4Ij1bd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxeFAqzx8PMU-OlEPh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzUzn8LaDwV6I-S0od4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2h_oCzq1aY9Fv8ER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzGTc9UOwBTZqwPxNF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxTBC5k-4vyXSXCWrR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"resignation"}
]
```
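A raw response like the one above can be consumed directly once it parses as JSON. The sketch below (not the pipeline's actual code; the dimension names are taken from the table above, and the two embedded entries are copied from the raw response) parses the array and tallies the labels assigned per coding dimension:

```python
import json
from collections import Counter

# Two entries copied from the raw LLM response above, as illustrative input.
RAW_RESPONSE = """[
{"id":"ytc_UgyU2OcuSfcdTHhjoBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwsSHfVj2eUafEJmaN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally_codes(raw: str) -> dict:
    """Parse a raw LLM response and count labels per coding dimension."""
    rows = json.loads(raw)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for row in rows:
        for dim in DIMENSIONS:
            counts[dim][row[dim]] += 1
    return counts

totals = tally_codes(RAW_RESPONSE)
print(totals["responsibility"])  # Counter({'developer': 1, 'user': 1})
```

If the model ever emits malformed JSON or drops a key, `json.loads` or the key lookup will raise, which is a reasonable place to flag the response for re-coding.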