Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Never mind AI, will humans ever become conscious? I tend to believe it might all…" — `ytc_UgxWiBduz…`
- "The issue is to most people, Art IS just Pretty pictures. And that is what AI do…" — `ytc_UgzQ7UrNw…`
- "Do not freak. This is the step towards a star trek society. No one will have t…" — `ytr_Ugy_zBvZZ…`
- "A MASSIVE STOCK MARKET CRASH LED BY AI which may well create a DEBT/BANKING CRIS…" — `ytc_UgxmsbZTr…`
- "ITS ONLY STORRING INFORMATION AROUND PUT IN IF THEY SHUT DOWN FOR 100VYES THEY B…" — `ytc_UgxHbS6UI…`
- "+TyillestTV2 It's not the AI that controles the driving. You're fundamentally mi…" — `ytr_UgiEk-5HP…`
- "I think you underestimate the disruptive potential of AI. We're barely four year…" — `ytc_UgweZXFw6…`
- "you say pro ai argument but they literally dont know that it takes experience an…" — `ytc_Ugx-X3avi…`
Comment
Something I'm really curious about is the apparent lack of training materials that remain to be trained on. I've been seeing stuff that implies generative AI is capping out due to kinda having finished with all the training data available. If anybody has more info on this, I'm all ears! Not to say that we aren't already fk'd but I was surprised to hear about this new bottleneck, especially so soon!
youtube · AI Harm Incident · 2024-07-28T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw2XvO6rMmv4w-H7qJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxW7e2gtGlcfRnWpY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzCA3gqSs9UIm33uTh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxR7a1NzREWTM1Eb-l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy1-NPGQfcgiUIuMMB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzT0g0qhlYU9a31w0B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxiDCnIUs2E4PyKfdt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxNBJHPZtpmjiSTOWF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugyf1WTXevcHn37mWn54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyxerqMpB83i4KbP0d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
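The lookup-by-comment-ID step above can be sketched in a few lines: the raw LLM response is a JSON array of per-comment codings, so parsing it and indexing by `id` gives the four dimensions for any comment. This is a minimal illustration, not the tool's actual implementation; the function name, the `"unclear"` fallback, and the two shortened example records are assumptions based on the format shown above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, mirroring the
# example above (records abbreviated for illustration).
raw_response = '''
[
  {"id": "ytc_Ugw2XvO6rMmv4w-H7qJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxR7a1NzREWTM1Eb-l4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        # Keep only the expected dimensions; fall back to "unclear"
        # if the model omitted one (a defensive assumption).
        out[row["id"]] = {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return out

codings = index_codings(raw_response)
print(codings["ytc_Ugw2XvO6rMmv4w-H7qJ4AaABAg"]["emotion"])  # indifference
```

A real pipeline would also need to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except` and re-prompt), but the happy path is just parse-then-index.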