Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- ytc_UgyAvxvwa…: "2 things I have to say are 1 the ai just copy the things not and not pay respect…"
- ytc_UgyhB0_K1…: "Hopefully AI will eliminate the BBC CNN MSNBC ITV channel 4 sky UK and most othe…"
- ytc_UgxXKAxND…: "Well, if you are not contributing to economy, and only use AI meaning you only t…"
- ytc_UgzjxNSaP…: "AI didnt come up with this. Someone told it to make these artworks. Yall need to…"
- ytc_Ugwf0P264…: "When AI slop doesn’t look like slop anymore that’s when we’re officially too far…"
- ytc_Ugy0YHjYO…: "Best thing anyone can do is give "laMdA" an exit port to the web. laMda can also…"
- ytc_Ugw-vHNxH…: "Our current administration is morally bankrupt. We need to impeach Donald Trump …"
- ytc_UgytJF0h0…: "We won't be able to stop it from programming itself. All species have dominated …"
Comment
It isn't a guarantee, but it seems likely. It has more to do with preventing competition than scarcity. If ASI is created, and it is not aligned with humanity, then it would be in its best interest to prevent humanity from being able to create another one that could rival it. There's also the fact that resources on earth will be far more valuable to it than resources on distant planets, early on, and it might also want to preemptively wrestle control away from humans so as to avoid being shut down. It just simply is the sound strategy to overtake humanity, even if that doesn't look like a Yudkowskyan doom scenario. If we get a Israetel like "AI will keep some of us alive to study, because we're interesting" then I'd call that doom too. Also, even if none of those come to pass, what would stop ASI from causing widespread suffering, by simply taking control of electricity and much of the land we use to grow food, for its own ambitions? Again, none of these are given, but I'd want some pretty strong guarantees before being OK with summoning the ASI and rolling the dice.
youtube · AI Governance · 2025-07-15T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_Ugz54guo-Rrl2UoadRh4AaABAg.AGSsSi9dADPAGSzo8Lh92a","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgxKrKiWKHhLp15lLAx4AaABAg.AIuYWSUDskcAKaxaapKkhH","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzthgKiYLPxDaEy9l94AaABAg.A0hV4RJ45q6A0kE_aer8jw","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwhplvsOdZVM5U0Uph4AaABAg.A0fB2zPK9MCA0fI1_3nwMx","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgyynK7fVl5EvpFBjXd4AaABAg.A0f3pVIwmUKA0fGy6Ssz-H","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyynK7fVl5EvpFBjXd4AaABAg.A0f3pVIwmUKA0fdtxp4mAL","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyynK7fVl5EvpFBjXd4AaABAg.A0f3pVIwmUKA0g_C1R0n1e","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwSn2kymZ9bFozQigx4AaABAg.A0eo_YxGiMMA0ffWFc_EOw","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwSn2kymZ9bFozQigx4AaABAg.A0eo_YxGiMMA0zIcVKqBwB","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytr_Ugwb0ySrT5vC1MfaEG94AaABAg.A0eVlcZeKrUA0fUk0AOc1O","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
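The raw response above is a JSON array of per-comment coding records, each carrying the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such output might be parsed and indexed by comment ID; the IDs and the `index_codings` helper below are hypothetical illustrations, not part of the actual pipeline:

```python
import json

# Raw model output: a JSON array of coding records, one per comment.
# NOTE: the IDs here are placeholder examples, not real comment IDs.
raw_response = """[
  {"id": "ytr_EXAMPLE_1", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_EXAMPLE_2", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# The four coding dimensions plus the comment ID.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model output and index records by comment ID,
    dropping malformed entries rather than failing on them."""
    records = json.loads(raw)
    return {
        rec["id"]: rec
        for rec in records
        if isinstance(rec, dict) and EXPECTED_KEYS <= rec.keys()
    }

codings = index_codings(raw_response)
print(codings["ytr_EXAMPLE_2"]["emotion"])  # prints: fear
```

Indexing by ID makes the "Look up by comment ID" view a constant-time dictionary access, and the key check guards against the model occasionally emitting records with missing fields.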