Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "We shall come back to this video in 2027 and see. Just like all the other doomsd…" (ytc_UgxuOT6jD…)
- "AI art is not art, and people who post AI art are not artists. Art made by artis…" (ytc_UgzDviasb…)
- "The monster was never inside ChatGPT. It just finally had a place to speak throu…" (ytc_Ugzosd-6z…)
- "I begin to find it cringe for how these Tesla people really suck up to this bran…" (ytc_UgyQRke7Y…)
- "also a lot of us artists care about ppl generating AI art in general because no …" (ytc_UgyYIYWwo…)
- "When everything is automated, who will buy the products hauled by these trucks a…" (ytc_UgwLE7pPR…)
- "I did not come to AI images out of curiosity. I came to it out of rejection. Rej…" (ytc_UgxRhZ0xm…)
- "Separate arterial traffic from local traffic. The mix of speeds is dangerous. Ar…" (ytc_UgyLeAwXa…)
Comment
Eliezer had a great example several years ago (when GPT-2 was new) of how accurately predicting text can require being much smarter than whoever produced it, which I'll adapt here.
I have a WARC file on my computer. This is a concatenated set of HTTP requests and responses, each preceded by a bunch of pseudo-headers. If I gave part of it to an LLM base model and ended the prompt with:

```
WARC-Block-Digest: sha1:C7TER6D5FOEWSUZSTS7W4M37BZDIZLDJ
WARC-Payload-Digest: sha1:X3D542ITV7HNYEEMVAH5LHRDPQYFCWFF
Content-Type: application/http;msgtype=response
Content-Length:
```
then to accurately autocomplete it, the model would need to output the number 784975 and then an HTTP response that produced exactly those hashes, which is not a thing any human could possibly do unless they happened to already know the answer. Even with the full context, which would help a great deal, I don't know if you could actually do it without finding and looking at the webserver logs.
Source: youtube · AI Moral Status · 2025-11-09T06:3…
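To make concrete why this completion is intractable: WARC digests are base32-encoded SHA-1 hashes of the record content, so emitting a body that matches a fixed digest means inverting SHA-1. A minimal Python sketch of the digest computation (an illustration of the format, not the commenter's setup):

```python
import base64
import hashlib

def warc_payload_digest(payload: bytes) -> str:
    """Compute a WARC-style labelled digest: "sha1:" plus the
    base32 encoding of the SHA-1 of the payload bytes."""
    sha1 = hashlib.sha1(payload).digest()  # 20 raw bytes
    # 160 bits / 5 bits-per-char = exactly 32 base32 characters, no padding
    return "sha1:" + base64.b32encode(sha1).decode("ascii")

# A predictor "autocompleting" the record must emit a Content-Length and a
# response body whose digest matches the header exactly; no amount of
# cleverness in the language model substitutes for knowing the payload.
digest = warc_payload_digest(b"hello world")
print(len(digest))  # 37 characters: "sha1:" + 32 base32 chars
```

The digests quoted in the headers above are 32 base32 characters each, consistent with this encoding.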
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzCfgXOWqj_QckvzY14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1PcgxyRpO6yFePBd4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgysOsgfV69frC13hlN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzwr_KSzvipseA0Au94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyl9pZdZa4uSa23sUZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwB_5LjgvmB9LLCc3d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy03wl9LdwnUgQDn0l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzURJ6yX_tzv56jRcV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyySM7JDt6YFvJjZd54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8jelfArGzHzPt87F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
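Responses in this shape are easy to machine-check before accepting them into the dataset. A minimal validation sketch in Python; the allowed field values below are only those observed in the response above, and the actual codebook may permit additional categories:

```python
import json

# Allowed values observed in the coded responses shown here;
# the full codebook (not shown) may define more.
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "mixed", "outrage", "indifference", "fear"},
}

def validate_coding(raw: str) -> list[str]:
    """Return a list of problems found in a raw LLM coding response.
    An empty list means the response parsed and every field is in range."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    problems = []
    for i, rec in enumerate(records):
        if not rec.get("id", "").startswith("ytc_"):
            problems.append(f"record {i}: missing or malformed id")
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                problems.append(f"record {i}: bad {field}={rec.get(field)!r}")
    return problems

sample = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
print(validate_coding(sample))  # []
```

Rejecting a whole batch on any problem keeps the stored codings consistent with the schema shown in the Coding Result table.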