Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "Classic ‘Grok AI reveals Göbekli Tepe secrets’ clickbait. Göbekli Tepe is an inc…" (ytc_UgwoB5UqO…)
- "Instagram recommended to me their AI chat bot thing. Most were celebrities or "M…" (ytc_Ugzx2T59U…)
- "When It comes to nukes there's only 2 certainties: 1) we haven't blown ourselves…" (ytc_UgwUybGh6…)
- "Try asking it to interpret a spec and write the code for that. OP is correct tha…" (rdc_j8ckcvr)
- "Your comment was removed for off-topic political discussion unrelated to ChatGPT…" (rdc_oi2aa8s)
- "First of all that was an emotional response to what appears to be fear. Perhaps…" (ytc_UgyvZdaNw…)
- "I've reach a point where if i see that some company is using ai to make theyr ad…" (ytc_UgziJrKEa…)
- "You are also not supposed to use FSD in the dark. The manual clearly states tha…" (ytr_UgytfO17S…)
Comment

> No, you can have UBI and real people can still contribute to monitoring social institutions, contribute to research ("mankind is the best subject of man"), and pursue creative and purposeful endeavors (education, travel & leisure) — and until fully autonomous robots exist we can still work on public infrastructure. Post-humanism should allow for peaceful coexistence with AI and humans, it's only the greed and power-hungry billionaires (someday soon trillionaires) and cluster owners who want to hold all the wealth and keep the remaining working population dumb, poor, and distracted

Source: youtube · Video: "AI Moral Status" · 2025-08-12T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxpleylattzeD6_dP14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxGKem7pfuEc8Fj-y94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzFwSifMTXKcaQgW2Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyvkABZ8g4_3IQ97eh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgypWgVYP4f0H4rBmEt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgylHfIwlLb0fbIn5hF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugygq4BEkPfuaUnKOqx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxWKudZWjvr9NaQE-J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyznEMuY_Z4M9M7vyB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJCWFtxQ8G9l2XAHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
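A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed labels per dimension are inferred from the values observed in this output (e.g. `responsibility` ∈ {none, company, user, ai_itself}), not from a published codebook, so treat that vocabulary as an assumption.

```python
import json

# Assumed per-dimension vocabularies, inferred from the observed LLM output.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "ban", "regulate", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows
```

Rejecting out-of-vocabulary labels at ingest time keeps a single malformed model reply from silently contaminating the coded dataset.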