Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Having fabricated DEI hires read ai responses is beyond creepy. In fact, it's a …" (ytc_Ugwb3xPCj…)
- "Throughout history, most human beings have never made decisions that will involv…" (ytc_UgwbZCugq…)
- "we should fill the internet with stories of AI shutting itself down, in order to…" (ytc_UgwooOcqx…)
- "One way to spread the benefit from AI across the population and thus cushion the…" (ytc_UgwXNjALO…)
- "I have goosebumps I used the same rules and asked a.i. almost the same questions…" (ytc_UgxONAqFE…)
- "the Ai video is rubbish for this song compared to what a human could do…" (ytc_UgwuBEC28…)
- "It’s simple asking AI to create a video of trump sticking his head up his own as…" (ytc_UgwZi_wjJ…)
- "It's a bit more than that, as these same types of test were/are used in "charact…" (ytr_UgzkmT4Ph…)
Comment

> AI models are literally just really insanely complicated matrix math, it is not capable of thinking or awareness. You can get an AI to talk and say literally anything if you talk to it a certain way long enough. There absolutely need to be safeguards but it may not be 100% possible

Source: reddit (AI Governance) · Posted: 1762573039.0 (Unix timestamp) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nnlpkp6","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_nnl46c2","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_nnl0g5i","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_nnpmqk2","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"rdc_nnl5u2t","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
```
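The lookup-by-ID workflow above can be sketched in Python: the raw batch response is parsed as JSON and indexed by comment ID, so the row for any one coded comment (here `rdc_nnpmqk2`, the reddit comment shown on this page) can be pulled out. The variable names are illustrative; only the ID and dimension keys (`responsibility`, `reasoning`, `policy`, `emotion`) come from the response above.

```python
import json

# Raw batch response from the coding model, verbatim from this page.
raw_response = """
[
{"id":"rdc_nnlpkp6","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_nnl46c2","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_nnl0g5i","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_nnpmqk2","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"rdc_nnl5u2t","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
"""

# Index the batch by comment ID so a single coded comment can be inspected.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

record = codes_by_id["rdc_nnpmqk2"]
print(record["responsibility"], record["policy"])  # prints: developer liability
```

Note that the batch response covers five comments; only the `rdc_nnpmqk2` row corresponds to the Coding Result table above (developer / consequentialist / liability / mixed).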