Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click an entry to inspect):

- "Here's how I see it. We have unwritten rules on what an art community should be,…" — ytc_UgwmcKs00…
- "@thatonellamawhoissoobsesse8138 the fun streamer who makes fun of being hyperbo…" — ytr_Ugy261PRC…
- "Well, how long is someone going to pay? This seems to work because OpenAI and An…" — ytc_UgzbtT-WL…
- "Okay let's all agree to stop using the term "AI Art" and just refer to them as A…" — ytc_Ugw11rCfn…
- "1. Weak AI (our current technology) 2. Strong AI- self aware AI, human level in…" — ytc_UgjXp-Uti…
- "You don't know about ai art they sometimes literally copy everything and made th…" — ytc_UgxOjxMie…
- "As a developer... honestly... it will be one of the last jobs AI takes over. 3d…" — ytc_UgxniQqpJ…
- "Look if everyone has no money... Money cannot circulate... That means NOBODY wi…" — ytr_UgyN4iQxR…
Comment
Our current AI models are pattern constraint satisfaction engines. They are mirrors. Cognitive amplifiers. If we feed them negative human scenarios, they produce negative human responses, but this is a far cry from them initiating any of these things. I don't know when we will build models that are more than this, but it's not today. The real danger of today's "mirror amplifiers" is that most people are not healthy enough to have themselves mirror-amplified, and not intelligent enough to know that prompting an LLM long enough for it to give responses that demonstrate willfulness does not make the model willful. It makes you the willful actor in the outcome.
Source: youtube · AI Harm Incident · 2025-09-17T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwBohJJfgHJ6tdL7E94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxaaBIqA9UOAS09rVF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzeLY1akqFfhczToFF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy4cU9JURQrPgXqust4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwhL3nWEB_GSOGjmYF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyd9F5pre44NHuXavR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyPSIM7cP2_UM3VOhd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugzwg8bbI7P_DteU9lN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy25tgfviyS9u9PAVd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwAfPHAdlkerESe_b54AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"resignation"}
]
```
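As a sketch of how looking up a coding by comment ID might work, assuming the raw model output is a JSON array of rows like the one above (the `coding_for` helper is hypothetical, not part of the actual tool):

```python
import json

# Excerpt of a raw LLM response: one coding row per comment ID.
raw = '''[
  {"id":"ytc_UgwBohJJfgHJ6tdL7E94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyPSIM7cP2_UM3VOhd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coding row for a given comment ID, or None if absent."""
    rows = json.loads(raw_response)
    return next((r for r in rows if r["id"] == comment_id), None)

row = coding_for(raw, "ytc_UgyPSIM7cP2_UM3VOhd4AaABAg")
print(row["responsibility"], row["emotion"])  # developer resignation
```

Because the model is asked for strict JSON, a parse failure in `json.loads` is the natural place to detect and re-request a malformed response.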