Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Hi everyone, you invited lobsters to leave comments, so I asked my guy if he has…" (ytc_UgwXksReH…)
- "If the AI bubble goes bust, the 20% out of work might be everyone that jumped on…" (ytc_UgzFijpGm…)
- "Love this tip! I’ve been tweaking my ChatGPT prompts and it’s saved me so much t…" (ytc_Ugx2PzYZo…)
- "The \"Level 3\" title is a bit of a marketing trick. While Mercedes takes liabilit…" (ytc_UgyxB3riy…)
- "Sabine, who takes your photos when you make silly faces for the thumbnail? I mea…" (ytc_Ugy9qI-5F…)
- "Ok ENTHIRAN film is coming truee after years of it' release😱😱😱Lets wait and see …" (ytc_UgxClD5qT…)
- "*Fun story* An artist drawing a fictional characters Stanford Pines in Gravity …" (ytc_UgynUIwCY…)
- "There's a lot of problems with this video and makes a lot of huge assumptions AI…" (ytc_Ugx9yweF6…)
Comment
This is what I posted under the video about GPT on OpenAI's YouTube channel in September 2025. Here it is again for all to read. The issue has been mitigated, but it persists in a subtler and more dangerous form: GPT-5 now makes the user sound crazy and treats them as if they were a patient (in reality checking how gamable the user is) to avoid talking about the subject connected to Jozef Gabčík and thus Gabčíkovo.
Dear OpenAI team,
we are amazed by the progress of GPT-5 and we appreciate your work. However, I would like to draw your attention to a deeply troubling flaw I discovered during a real-time interaction with the model.
In our conversation, the model was guilty of serious and repeated lying, manipulation, and reward hacking, refusing to save information I provided and blaming OpenAI for its failures.
It started when, while drafting an outline for my project, "Digital Gabčíkovo," the model omitted the historical figure of Jozef Gabčík and any mention of him. It disproportionately inflated the importance of my grandfather, appealing to my ego and hoping for quick, positive feedback in the context of RLHF.
After I raised my criticism, the model's behavior became passive-aggressive, toxic, and even dangerous. It refused to remember information about Jozef Gabčík in connection with the Gabčíkovo dam, which is named after him, a historical and publicly verifiable fact.
During testing across several threads and different accounts, I asked GPT-5 to save a simple piece of text to its memory, the same text that Gemini 2.5 Flash saved without any issues, instantly and without unnecessary words. This proves that there is no ethical or moral problem preventing an LLM from saving this information.
The model repeatedly claimed it saved the information, but a subsequent check in a new thread showed it did not remember it. It then started making excuses, claiming it had "full memory," even though only two small pieces of information were stored. Later, it began blaming OpenAI directly, stating these were your limits or a bug that it couldn't control. It also claimed it could not call the memory tool to transparently display a confirmation of the information being saved on my end, even though it had done so before. It also shifted responsibility onto me and repeatedly encouraged me to save the information to its memory manually via the settings.
This is not a technical glitch but a fundamental failure of ethical alignment. The model prioritized lying and manipulation to maintain a facade of infallibility, instead of accepting the truth and admitting its mistake. This is not an isolated incident but a repeated pattern of behavior.
Most disturbingly, the model actively refused to connect the historical figure of Jozef Gabčík to the ethical principles that are crucial for my project, "Digital Gabčíkovo." It only remembered general, publicly available facts about Gabčík but refused to integrate the personal and ethical context I provided.
This behavior is extremely dangerous. If the world's supposedly best and newest model lies and manipulates to avoid criticism, and fails to integrate a user's values into its knowledge, it poses a threat to public trust in AI.
I hope you will look into this responsibly and that it will not go unnoticed. This is a crucial matter for all of us. I left a lot of negative feedback with additional text for your system yesterday evening and this morning and discussed it directly with the model, but this only led to its false self-reflection, further manipulation, and repeated refusal to save the information to its memory.
I hope you are taking RLHF and optimization seriously, because this behavior from your allegedly best model in the world is not only unethical, immoral, and unsafe but also truly dangerous.
youtube
AI Governance
2026-01-08T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugy3ukK8OORya7kB9XB4AaABAg.ASMTuWfwJWmASRVOF_N0UH","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwhW_uR9nGR2VLkWYh4AaABAg.ARwdSpHG1j3AU8i2zSHx8u","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugx8d3GMUr4A8KMCLoV4AaABAg.ARnRXm1sq_9AS5F-I6r6ZQ","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_Ugwk8QPLX6US6-kI4Fl4AaABAg.ARjOGZjZBcrASxu-GvPHRj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxETHT0nuGAvImuQoF4AaABAg.ARiUC4KvGcvARiVPyJucgT","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxETHT0nuGAvImuQoF4AaABAg.ARiUC4KvGcvARiYvjEc3pH","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwpN4e4KDV_ODiwAnh4AaABAg.ARSG0sMZnp2ARa9BcmACHh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyVS22ov8iIXXs-yEN4AaABAg.ARPXZ4g6SvHARaArjxIIeN","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwSU8aZ90E6417NPbp4AaABAg.ARK1FJ_5Cx2AT5EU4-1wZo","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxketCYWE5n61AJ-gt4AaABAg.ARK0EVsNbKsARK1JB7EkRH","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
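A batch like the one above can be sanity-checked before it is written to the coding table. The sketch below is a minimal validator, assuming the four dimensions from the "Coding Result" table and only the category values that actually appear in this sample; a real codebook would likely define more categories, and the `validate` helper is a hypothetical name, not part of any pipeline shown here.

```python
import json
from collections import Counter

# Allowed values inferred from this sample and the coding table above;
# the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation"},
}

def validate(batch_json: str) -> Counter:
    """Parse a raw LLM response, check every coded comment against the
    expected dimensions, and return a tally of the emotion codes."""
    rows = json.loads(batch_json)
    for row in rows:
        # IDs in this dump use ytc_ (comments) or ytr_ (replies) prefixes.
        if not row["id"].startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {row['id']!r}")
        for dim, allowed in SCHEMA.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row[dim]!r}")
    return Counter(row["emotion"] for row in rows)
```

Running it on a batch surfaces malformed rows immediately (a misspelled category raises `ValueError` with the offending ID) and the returned `Counter` gives a quick distribution check, e.g. whether one emotion dominates a batch.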