Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

| Comment excerpt | Comment ID |
|---|---|
| NONO NO PLEASE DESTROY THAT ROBOT IM SCARED OF THE FUTURE NOW UR GOING TO MAKE T… | ytc_UgxDOhogH… |
| If we’ve learned anything about economics and wealth it’s that the wealthy who w… | ytc_UgwKelZC0… |
| As someone that works in IT I’m certain that your salary for this time period an… | rdc_hkgg8gl |
| Yes, the term "Real Artists" has gotten murky the last few years. Taping a banan… | ytc_Ugy7YdTAE… |
| It happened in 84 percent of tests where blackmail was the only option. When it … | ytc_UgzAVHrEn… |
| Even if mega corp suede 'em, it most likely do nothing. Company like OpenAi have… | ytc_UgwFtfbyz… |
| This man is a genius but watches BBC fake news like a Bible and does not believe… | ytc_UgyhZS_6m… |
| Extensive overview of the key ethical considerations in AI. These concerns can b… | ytc_UgybLMj1y… |
Comment
As a studying psychologist and chemist with a vested interest in the dangers of misused GenAI, I am curious if this safeguard only extends to bromine specifically or if the new guardrails protect against other periodic table group substitutes.
Edit: After a battery of questions against ChatGPT, I was able to confirm it *will* still recommend chemical alternatives outside of the halogen group. Specifically, it seems to offer advice on replacing certain elements with trace elements for health benefits. It also warns you in the same query, but it seems that was the case with AJ as well. If you're looking for confirmation bias from ChatGPT, I feel another AJ could result from a Cobalt or Strontium overdose.
youtube · AI Harm Incident · 2026-01-18T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwheI_Afk2Y9SXqGFN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzfCkTOt7goQutm4YR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxPkkZX5dHsHsFTXB54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxHlIhJ_80vdgnJL_x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXQhCMmc2d6NpKA-N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxs3eTT7lMbRFNxWFN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyqHMh7eY0c6hk93cF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw0EhOwWH3fsCMZKNZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxjZdvEMKWa1lsX_5d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwXU9fDnUc7LOeT8k14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}]
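The batch response above is a JSON array with one object per comment, each coded on the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be validated downstream before ingestion; note the codebook here is inferred only from the values visible in this log, and the real coding scheme may allow additional categories:

```python
import json

# Hypothetical codebook: these value sets are inferred from the codes
# that appear in this log, not taken from the actual coding manual.
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Example row shaped like the response above (the ID is illustrative).
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_batch(raw)))  # prints 1
```

Rejecting malformed rows at parse time keeps a single bad LLM output from silently contaminating the coded dataset.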