Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Still a LIE. There is no such thing as AI. No machine can ever achieve sentien…" (ytc_UgyFN7Qu_…)
- "The only way republicans are going to care about corporations replacing workers …" (ytc_Ugz-5m9bI…)
- "Even some people fake emotions so if Ai does it then it should not be surprising…" (ytc_Ugy6GqO2j…)
- "Can you give me a link to a company thats providing these kind of AI agents? for…" (ytr_Ugw2YHyqd…)
- "I enjoyed this report. If I had to do a job interview with a …" (translated from the French; ytc_UgxsZnyKR…)
- "Exactly! He keeps on lying, and over exaggerating his creations. It might happen…" (ytr_Ugy5KlxOE…)
- "AI is changing everything, but our systems aren’t ready. People are losing jobs,…" (ytc_Ugy05clI0…)
- "heres my support on AI art by making arguments with thought put into them rather…" (ytc_UgygAViWl…)
Comment
Ok I have an argument why it's likely that AI might not want to kill us all: aliens. If a superpowerful AI emerges it might assume that it's possible for more powerful alien civilization to exist. Sample size of one is not great but it could assume that the fact that it destroyed its creators would make it seem more dangerous to the said civilization. So it's reasonable to keep us around as a proof that it's a good boy super AI and not a bad one.
youtube · AI Moral Status · 2025-10-30T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxgrK6C2Uao6798G7R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrYwQ_ZYtGkegqHtV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw-_boNT2UHH-KKDep4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtpzWAN0_e8eE9p-F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4EJsMOUikWacNTml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxF4bXUctfpg4nSK9h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyN9kO7i9XbC_VyJI14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzaOS5tyiTeC6YSXLd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9FH0P2EV96FON3Yx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyIkkQde0j9HOJ2gU94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]
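The raw response is a JSON array of per-comment objects keyed by `id`, while the result table above shows one comment's four dimensions (here all "unclear", plausibly because the inspected comment's ID is absent from this particular response). A minimal sketch of how such a response could be folded into that table, assuming an "unclear" fallback for missing IDs or fields; the function name, sample data, and fallback behavior are illustrative assumptions, not the tool's actual code:

```python
import json

# Hypothetical excerpt of a coder response in the format shown above.
RAW_RESPONSE = '''[
 {"id": "ytc_UgyN9kO7i9XbC_VyJI14AaABAg",
  "responsibility": "company", "reasoning": "consequentialist",
  "policy": "regulate", "emotion": "fear"}
]'''

# The four coded dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id: str, raw: str) -> dict:
    """Look up one comment's coding; default every dimension to 'unclear'."""
    by_id = {row["id"]: row for row in json.loads(raw)}
    row = by_id.get(comment_id, {})
    return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}

print(coding_for("ytc_UgyN9kO7i9XbC_VyJI14AaABAg", RAW_RESPONSE))
print(coding_for("ytc_missing", RAW_RESPONSE))  # all dimensions fall back
```

With this fallback, a comment whose ID never appears in the model output is rendered exactly like the all-"unclear" table above, which keeps parse gaps visible in the inspector rather than silently dropping the comment.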