Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
From the article
>The dog-shaped walking robot that the IDF is using in Gaza…
rdc_ku4vvyd
How can AI have read everything that humans have ever written? Is all of literat…
ytc_Ugxm283sK…
@olggbot yeah but when your drawing is literally about drawing a face and you tr…
ytr_UgzAUwsSS…
CEO should be the first replaced, what do they do that a computer cannot do bett…
ytc_UgyqbBt6_…
I always click thumps down for the right answer and thumps up for the wrong answ…
ytc_UgyrFWSGX…
As another artist, I also spend time and hard work that gradually turns into emo…
ytc_Ugx_Q6R_p…
What about when AGI is advanced enough to be comparable to humans? Like Im talk…
ytc_Ugw-EWJ7-…
Your term of AI vs human doesn't qualify because behind the information your hid…
ytc_Ugy16qffA…
Comment
I have a hot take: Even if these LLMs/AIs never progress one bit beyond where they are now, they are incredibly dangerous. We should NOT underestimate the destructive power of ubiquitous slop. This civilization? This...whole thing we take for granted? None of it works without some sort of buy-in to a shared account of reality, and if there is one thing that has been made painfully clear to me over the last few years, it is that humans are NOT optimized for truth, or facts, or reality. The "best, most true" ideas do NOT always rise to the top. What feels best to us rises to the top; the thing that MOST grabs our attention. It takes a bunch of boring unglamorous work, facing uncomfortable realities, and no shortage of humility to get where we are right now, and these LLMs have already given us the ability to completely destroy our information ecosystem, our sense-making institutions - all of it.
And these companies want to get to Superintelligence?
youtube
AI Moral Status
2025-11-15T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgymfilqznRZA1S9yyN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx_cVaBAgfLz2u04DV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx6s4u5IWOZVvCskUZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_Ugxs2NY-BatQWlDvyYd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugw1-p6G-q2c0CXmBGp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgznX4wnrv0ujmSB4Jt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzRcOPv7YO7QLCFmPx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzHsPefPtWfUqua_9J4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz1HoWp1JPZAc0gd-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgylegS_BYQFy7PfK1Z4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
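A response like the one above can be checked programmatically before the codings are accepted into the dataset. The sketch below validates each row against per-dimension value sets; note these sets are inferred only from the values visible in this sample, so the real codebook likely defines more (this is a minimal sketch, not the tool's actual validator).

```python
import json

# Allowed values per coding dimension, inferred from the sample response above.
# ASSUMPTION: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "mixed", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coding row against ALLOWED."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"coding row missing 'id': {row}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {value!r}")
    return rows

# Example with the first row of the response shown above:
raw = ('[{"id":"ytc_UgymfilqznRZA1S9yyN4AaABAg","responsibility":"unclear",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
rows = validate_codings(raw)
print(rows[0]["emotion"])  # → fear
```

Malformed JSON raises immediately in `json.loads`, and a hallucinated dimension value surfaces as a `ValueError` naming the offending comment ID, which makes bad batches easy to re-queue.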