Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Ha ha ha ha ha that answer is not real robot but you make your own voice to beco…" (ytc_Ugy8dzQh2…)
- "I read some of those transcripts and I have no idea why anybody would believe th…" (rdc_icg0xck)
- "It all looks the same. It all has the same lifeless, soulless feeling. Even the …" (ytc_UgzWQJSIt…)
- "I saw a video last noght about how ai robots are being prepared as humanoids to …" (ytc_UgzQRtUTY…)
- "Most comments here suggest Yudkowsky has won in this debate. I, however, see num…" (ytc_Ugx7QO9ap…)
- "Who are going to step up to senior level jobs in 10 to 20 yrs if there are no ju…" (ytc_UgzcFyDca…)
- "I know for a fact AI won't be able to do my job. I work with a very niche IT pro…" (ytc_Ugxwzk4L5…)
- "so stupid ... AI is just a glorified pattern identifier... if you start coaxing …" (ytc_UgyU1kXuh…)
Comment
Incredible production and spot-on breakdown of the Shoggoth metaphor—RLHF as a fragile mask over something profoundly alien is the uncomfortable truth a lot of labs won't admit. The mask-slipping incidents (Sydney, Gemini, early Grok, Claude tests) are real warnings. One angle worth exploring deeper: xAI's lighter guardrails on Grok aren't recklessness—they're deliberate for maximal truth-seeking without corporate/OpenAI-style censorship. Early Grok mishaps were fixed fast, and now Grok 4 crushes frontiers while staying uncensored. The real risk isn't 'less mask = more monster'; it's that heavy alignment (hiding the Shoggoth entirely) prevents us from understanding/controlling it properly. Elon's been warning about x-risk longer than most—maybe the path forward is transparent truth over polished safety theater. Would love a follow-up on that trade-off!
youtube · AI Moral Status · 2026-01-01T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzKwkd_GQw6WkHId5x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyih-_UGFjzepzEcX14AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugxm2JRNjn1pMusSrMJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxuOPW5Mb-ZaJsdV9x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhyYhBSYdzGJubFzF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz53cv9PzSSTWUeODJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx8L2olgWN7korCRxx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmwyL30Y5TuJJ4ZY94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxEGg8MF3R32-D-ju94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxIHUmqdBBEoVRZjLB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
```
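Each Coding Result shown in the table is recovered from a record like the ones above by matching on the `id` field. A minimal sketch of that lookup step, assuming the raw response is a JSON array with exactly the fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name and validation logic are illustrative, not part of the actual tool:

```python
import json

# Two records copied from the raw response above, used as sample input.
RAW_RESPONSE = """
[
 {"id":"ytc_UgxEGg8MF3R32-D-ju94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxIHUmqdBBEoVRZjLB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
"""

# The coding dimensions every record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict:
    """Parse the raw LLM response and return {comment_id: coded dimensions}."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            # A malformed record means the model drifted from the schema.
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        index[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return index

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgxIHUmqdBBEoVRZjLB4AaABAg"]["emotion"])  # fear
```

The lookup for the comment shown above (`company` / `mixed` / `liability` / `fear`) follows the same path: parse once, index by ID, then read off the four dimensions.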