Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI's are easy to confuse. Bromide came up, and the AI associated bromide cleaning. So it started chatting about cleaning. It didn't make it clear that it wasn't talking about food anymore because it's always trying to impersonate a confident human being, and a confident human being doesn't say "uh, are we talking about cleaning now?". Combine that with someone who's looking to confirm something they want to believe, and it's not a good thing. It takes a human to stick to the topic at hand, know what's important, and know what's right.
And the whole "ChatGPT" versus "I" thing is pretty simple. The ChatGPT 3 program most likely doesn't run continuously. Each time the system gets a chat request, ChatGPT is run just for that one chat. That's the "I". Once the chat is done, or if it's abandoned for awhile, the run ends, and ChatGPT 3 forgets about the chat. This is a simple way to keep stuff from other chats from leaking into the current chat.
youtube · AI Harm Incident · 2026-04-20T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugyq7F8uKd4-q6H9KVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw-sACa30q38aUCiER4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzQVy8xXvsbGgG35HV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwHXFYLZSlUeXxCJLd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyCux2GKQxk0BvIrGx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzQEUGuAWwaCn8fOFF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhvOum004-Hp6wjCF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy3Gknio5-FAbynV4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKcZPI7CfR7CFmqCJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxi85BHGv50ld_SYnV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
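A raw response like the one above is a JSON array with one record per coded comment, so looking up a comment's codes by ID is a matter of parsing the array and building an index. A minimal sketch in Python, using two records copied from the array above (the storage location of the raw response is an assumption; here it is inlined as a string):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# In the tool it would be loaded from storage; inlined here for illustration,
# with two records taken from the response shown above.
raw_response = '''[
  {"id": "ytc_Ugyq7F8uKd4-q6H9KVJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwHXFYLZSlUeXxCJLd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

codes = json.loads(raw_response)

# Index the batch by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in codes}

record = by_id["ytc_UgwHXFYLZSlUeXxCJLd4AaABAg"]
print(record["responsibility"], record["policy"])  # company ban
```

Keying on `id` assumes comment IDs are unique within a batch, which holds for the response above; a duplicate ID would silently keep only the last record.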