Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "do you really love something if you didn't earn it or at least make it?" Yep. … (ytr_UgzGsy3Ap…)
- ChatGPT powered scam calls and e-mails. Good lord people, really buckle the fuc… (rdc_jfaal7s)
- As a Software Engineer, I will tell you this: AI doesn't think. It's just a bunc… (ytc_UgyzAkfq4…)
- Disagree on the just let-er-rip view of AI regulation. This is a good time to pa… (ytr_Ugw2IlJ7f…)
- Every big tech company does this cycle. Hire aggressively, announce "AI transfor… (rdc_oac3pd6)
- A really interesting perspective, but also perhaps a little disjointed and count… (ytc_Ugz3yt-_U…)
- We have no idea. We built an entire world in the ether. When the first AI was c… (ytc_UgwDS2x3R…)
- I would rather NOT have to work a fucking 9-5 for 50 years of my fucking life. L… (ytc_UgwcuZbpQ…)
Comment
“Corporation simply do not jokingly describe their products as humanity ending monsters”
Sam Altman is constantly talking about how AI could destroy the world because it materially benefits him.
It benefits Open AI to frame AI as an existential issue (either from it destroying us, or from the Chinese beating the US to build the first genuine AI), because once you do that there’s no limit on the money you can get people to throw at it, which they need since they’re blowing through billions of dollars per year.
youtube · AI Moral Status · 2025-12-14T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgySp1S_NR9VaTmnUgl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2fWY9gbQqi1epg8t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwa2OhJaldG-hrxWip4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwGnWZe-q0ir9r32P94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzlFiCRci-GGFnlLMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz6sgI0AwdZzl7TyYN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz3sZRbMv8ZwukC_dF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwMOjzKcaYPhzcnCr14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3aOaEvWZPWv3d09N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyJCE5eBqkIfFOdRDN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
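The raw response is a JSON array of per-comment codes, each keyed by a comment ID. A minimal sketch of recovering one comment's codes from such an output (the array shape is taken from the example above; the helper name `index_by_id` and the two reproduced entries are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records. Only two entries from
# the example above are reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_UgySp1S_NR9VaTmnUgl4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwGnWZe-q0ir9r32P94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
"""

def index_by_id(payload: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    return {row["id"]: row for row in json.loads(payload)}

codes = index_by_id(raw_response)
record = codes["ytc_UgySp1S_NR9VaTmnUgl4AaABAg"]
print(record["responsibility"], record["emotion"])  # company outrage
```

Indexing by ID like this is what makes the per-comment lookup shown in the "Coding Result" table possible: the displayed comment's ID selects its row from the batch response.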