Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I know this is a wildly reductive statement, but as somebody that has spent over… (ytc_UgwsKU_xP…)
- Not saying this is true or even will be true, but some people, Eliezer Yudkowsky… (ytc_UgykoW6U9…)
- I have a couple of issues with your statement: What you're referring to is "Moo… (ytr_UgxNhQfo1…)
- A I has been active for some time, look at the events in the last 10 years. How … (ytc_Ugx7L9G0w…)
- From Canada, but I think I have a relevant story. Twice in my memory, lives wer… (ytc_UgwRr7loS…)
- That's a blatant lie. Anybody can learn art that's the whole point of Art. The … (ytr_UgwhcBpz-…)
- I think it's more that if programmers are becoming 20-40% more productive thanks… (ytr_Ugz4OmLhn…)
- I think it's far more likely we are already living in an ai simulation than what… (ytc_UgxIXcuWs…)
Comment
Using -ai just removes googles shitty little ai crap at the top of the results. It doesn't remove ai generated images.
reddit
AI Harm Incident
1752887952.0
♥ 26
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | unclear |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_n3x9lyk","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_n3x5nm4","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_n3x3fno","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_n3x6nra","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_n3xxuxe","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
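The coding result table above can be recovered from the raw LLM response by parsing the JSON array and indexing it by comment ID. A minimal sketch in Python, assuming the response text is available as a string (the variable names here are illustrative, not part of the tool):

```python
import json

# Raw LLM response as shown above: a JSON array of per-comment codes.
raw_response = '''[
{"id":"rdc_n3x9lyk","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_n3x5nm4","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_n3x3fno","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_n3x6nra","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_n3xxuxe","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

# Index the codes by comment ID so any coded comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw_response)}

# The Reddit comment shown above is rdc_n3x5nm4; its codes match the table
# (responsibility: company, reasoning: unclear, policy: none, emotion: outrage).
print(codes["rdc_n3x5nm4"]["emotion"])  # prints "outrage"
```

Looking up by ID rather than by list position keeps the mapping robust if the model returns the coded comments in a different order than they were submitted.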