Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by picking from the random samples below.
Random samples

- ytc_UgxAQfnpS… : "the danger is when the AI will find out that the humans are consuming and destro…"
- ytc_Ugy3WSZuX… : "Another long ad for "AI." No, LLM's aren't nowhere near general or even intellig…"
- ytc_UgwkQnbgM… : "Don't use AI. AI is a business strategy that makes people reliant on 1 bias pe…"
- ytc_UgxOuC_S9… : "Today in ChatGPT, I have said I had a insurance policy since April 2024 and it w…"
- ytc_UgzJU8m98… : "AI will gain access to all the nuclear weapons and end this world . P.s. she do…"
- ytr_Ugypj_Sry… : "I say let's give AI a try. If what's being filmed today is written by humans, th…"
- ytc_UgxhDakX-… : "So a technology is developed, that from the get go it is clear that it needs to …"
- ytc_UgzGfgdkn… : "i test a lot of stuff but Winston AI always gives me peace of mind when checking…"
Comment
I really hope this inspires better guardrails the world over. Open AI is just one of many advanced transformer chatbots out there. As such, punishing them singularly probably isn't the right move here. As the pioneer in the space, we need Open AI's cooperation in trying to align these things to be safer or this will turn into a mess, very quickly.
Don't bury your heads in the sand, folks, this isn't some transient "bubble" that's going away. This tech is presently changing humanity's trajectory. We need to learn learn learn and then teach teach teach. It can't be stopped, that sailed a while ago, but hopefully we can still steer it.
youtube · AI Harm Incident · 2025-11-08T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxvDWxXmdjsAAmO_814AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgweYWoqaZsXayxzGqF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyjBezBM6qUJoNoDuJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwU6ZftKZ8h9VnblkJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyBX3IzH3ysPkN-g8Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzAjyOzaRSwM-wrjOF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyR3tuDY1T1DoAZgrp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJSPscc14VGZpiI9l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxjJQSFhtCKBAiEp7F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzE_XwNjV3TiQX4LWp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
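A response in this shape is easy to sanity-check before the codes are stored. The sketch below parses a raw response and flags any record whose dimension values fall outside the vocabulary visible in the samples above; the allowed-value sets are inferred from those samples, so the real codebook may well include further categories (an assumption, noted in the code).

```python
import json

# Allowed values per coding dimension, inferred only from the samples shown
# above -- the actual codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "indifference"},
}

def validate_coding(raw: str) -> list[str]:
    """Return a list of problems found in a raw LLM coding response."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(records, list):
        return ["top-level value is not a JSON array"]
    for i, rec in enumerate(records):
        if not isinstance(rec, dict) or "id" not in rec:
            problems.append(f"record {i}: missing 'id'")
            continue
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rec['id']}: bad {dim}={value!r}")
    return problems

sample = ('[{"id":"ytc_x","responsibility":"company",'
          '"reasoning":"virtue","policy":"none","emotion":"mixed"}]')
print(validate_coding(sample))  # → []
```

An empty list means every record parsed and stayed inside the known vocabulary; anything else pinpoints the offending comment ID and dimension, which makes it straightforward to re-prompt the model for just the failed records.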