Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `rdc_o6ltw70` — I love how this thread is equal parts "OpenAI was monstrous by not reporting thi…
- `ytc_UgzAOVF_-…` — The safety you speak of is for the big brother state. You have done nothing but …
- `rdc_k9hwj5q` — I picture it like the old Looney Toons where Elmer Fudd is chasing Bugs Bunny ar…
- `ytr_Ugx8OAAWp…` — We're totally on the same page! Just throwing this out there: what if another co…
- `ytr_UgxZfDUGB…` — @jacob.tudragens Yes I have 2 bibles, KJV and the Geneva from 1560. And when you…
- `ytc_Ugw0QDSJl…` — We should have made a self destruction bottom and destroy all Ai at a push of a …
- `ytc_UgwIw0Cs1…` — So you want us to rig the AI so it doesn't offend anyone for being correct? What…
- `ytc_Ugz-syA1t…` — I use chatGPT for work and its still very dumb, it is far from ready to replace …
Comment
Yes, and we are actually facing the opposite concern right now. Because of the heating-up AI race we have *less and less time* to do safety testing. Frontier labs are already forced to release models to the general public as quickly as possible, with an ever-shortening window for safety tests.

There's a reason safety experts quit or leave when they can't do their job properly, and it's one of the main reasons Anthropic changed its safety pledge: airheaded people like OP forcing us to release models that aren't ready for public release yet.

One of these days we might accidentally release a model that doesn't refuse to help you design a pathogen targeting specific races of people with just $3,000 in home equipment, and we'll be in a world of hurt. But at least people like OP got their toy 2-3 months earlier, so it's all worth it.
Source: reddit · Thread: AI Moral Status · Timestamp: 1773271119.0 · ♥ 24
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_o9vuvpi", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_o9zlsrt", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_oamhlx5", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_o9w3oxi", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_o9y0ohl", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
```
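A minimal sketch of how the raw response can be turned into a per-comment lookup table. The four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above; the function name `parse_codings` and the skip-malformed-entries behavior are assumptions, not the tool's actual implementation.

```python
import json

# Coding dimensions as they appear in the raw LLM response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response into a lookup table keyed by comment ID.

    Entries missing an "id" or any coding dimension are skipped rather
    than raising, since model output is not guaranteed to be well-formed.
    (Hypothetical helper for illustration.)
    """
    table = {}
    for entry in json.loads(raw):
        comment_id = entry.get("id")
        if not comment_id:
            continue
        if all(dim in entry for dim in DIMENSIONS):
            table[comment_id] = {dim: entry[dim] for dim in DIMENSIONS}
    return table

# Example lookup against the last entry of the response above.
raw = ('[{"id":"rdc_o9y0ohl","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["rdc_o9y0ohl"]["policy"])  # → regulate
```

Keying by comment ID mirrors the "look up by comment ID" feature of this view: one parse of the raw response, then constant-time access to any coded comment.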