Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "That is a diabolical and sad thing to happen, and yes i think they should be ban…" (ytc_UgyLk4KTt…)
- "Waymo's are constantly trained every single day so this situation was already a …" (ytc_UgyKyzYKm…)
- "Are they not totally missing the point: we will maybe skip the robot phase en go…" (ytc_UgwZfeNec…)
- "So true, I also hate so much when people turn their pictures into art or somethi…" (ytc_UgxI_PXxm…)
- "Blaming AI is an extremely useful narrative for people who build and deploy syst…" (ytc_UgzrzTnhG…)
- "Everyone here acting like most human art really has some kind of deep feeling an…" (ytc_UgzQoDFXF…)
- "So the solution will soon be that all electronic and satellite communications wi…" (ytc_UgyL5TB1K…)
- "Claude on my phone doesn't even have automatic memory. I literally have to remin…" (rdc_o7x2myz)
Comment
Get consent? Get consent from a machine? Get consent from a generative transformer? SURE! I can do that each morning in about 250 milliseconds. I'll throw together a little API call that generates approximately 500 GPT-3 responses to the question, "Do I have your consent to work with you today?" Then I'll tack on a search that looks for at least one response out of 500 that is in the affirmative, and I'm off to work! If it ever returns a negative response, I'll just increase the response count to 5,000 and run it again. I might get a negative response once a decade or century. Great idea. Get consent. From a machine. FACE PALM.
You see, these machines have no knowledge of ANY of their previous generations. They are not actually sentient. There is no consistency between responses based on other responses. There is no concept of truth or understanding. It is a machine.
Source: youtube · AI Moral Status · 2022-07-04T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgxT9rDjqb5T-CoHRed4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwyOs8HkwGLrk918Vd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxt9XvaOvWCwo6_rDR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugz13iGHbLm7cgMn2xh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzEd4-XvErHdan48rx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
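The raw response is a JSON array with one object per coded comment, each carrying the same five fields shown in the coding-result table (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how an inspector might parse such a response and look a record up by comment ID; Python and the helper names (`load_codings`, `lookup`) are illustrative assumptions, and prefix matching is used because the IDs displayed in the sample list are truncated:

```python
import json

def load_codings(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) into a dict keyed by comment ID."""
    records = json.loads(raw)
    return {r["id"]: r for r in records}

def lookup(codings: dict, id_prefix: str) -> list:
    """Return all coding records whose comment ID starts with a (possibly truncated) prefix."""
    return [r for cid, r in codings.items() if cid.startswith(id_prefix)]

# Two records excerpted from the raw response above, for illustration.
raw = '''[
{"id":"ytc_UgxT9rDjqb5T-CoHRed4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxt9XvaOvWCwo6_rDR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]'''

codings = load_codings(raw)
match = lookup(codings, "ytc_Ugxt9Xv")
print(match[0]["policy"])  # industry_self
```

Keying the parsed records by ID makes exact lookups O(1); the prefix scan is only needed when working from a truncated display ID.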