Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @David.Alberg When I say "they", I'm talking about ALL of these Technocratic T… (`ytr_Ugwe8mh5y…`)
- I wouldn't worry, if it were the case it wouldn't be as primitive and needy as a… (`ytr_UgxZIEB4d…`)
- Every human being and ALL your digital interactions on all social media platfor… (`ytc_UgznDzwi2…`)
- @GharwalaSwadbyRaani It’s what am trying to get everyone to do brother! The wars… (`ytr_UgxtH8G4g…`)
- AI's main use rn is putting original creators out of profit. So, in time no cre… (`ytc_UgyVhnmf6…`)
- Regardless of what it says, personally I don't believe something has consciousne… (`ytc_UgwdBpmFK…`)
- Everyone responsible for ruining society with ai belongs in jail. This is worse … (`ytc_UgyWVcJRg…`)
- One thing a think AI slop is giving AI, is the ability to make more realistic re… (`ytc_UgyI6fQ_t…`)
Comment

> 13:00 the bot actually doesnt know what people did in prior conversations in their chats, so it cant actually confirm! and also its context is full with YOUR chat history when it chats. only that exact persons account may have at least soe part of that chat loaded in context. ai doesnt have enough context length to store multiple people's chats

youtube · AI Harm Incident · 2026-03-02T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
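The dimension/value pairs above come from a closed coding schema. A minimal validation sketch follows; the category sets below are inferred only from the values that appear elsewhere in this log, so treat them as assumptions rather than the project's actual codebook.

```python
# Category sets inferred from values seen in this log; the real codebook
# may contain more (or differently named) categories -- an assumption.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "distributed", "user", "company"},
    "reasoning": {"unclear", "deontological", "virtue", "consequentialist", "mixed"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def validate(row: dict) -> list:
    """Return (dimension, value) pairs that fall outside the schema."""
    return [(dim, row.get(dim)) for dim in SCHEMA if row.get(dim) not in SCHEMA[dim]]

# The coding result shown in the table above.
coded = {"responsibility": "none", "reasoning": "unclear",
         "policy": "none", "emotion": "indifference"}
print(validate(coded))  # [] -> every dimension holds an allowed value
```

An empty list means the row is schema-valid; any out-of-vocabulary value (e.g. a hallucinated category in the model output) is surfaced for manual review.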
Raw LLM Response
```json
[
{"id":"ytc_UgwXCeONdVPEAE4NdgF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxERQ5OQXBCLNdwvCt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyRrZAZ9xQ3jnnmRqh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzGbbYD1Gci3wHosjh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzKHqymmg0V1vliQxh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugz0n9fa9cLgR5DCAw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7d2i4TmbS8kTg4UF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeSVN-ddHWYuwWovh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxHmKIvC4mtw8VCl7F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwdSivj0gRJPEcR7nF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
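The "look up by comment ID" step above reduces to parsing this JSON array and indexing each coding row by its `id`. A minimal sketch, assuming the raw model output is exactly a JSON array of objects with the field names shown (the helper and variable names here are illustrative, not from the actual pipeline):

```python
import json

# Two rows copied from the raw response above, standing in for the full array.
raw_response = """
[
  {"id": "ytc_UgwXCeONdVPEAE4NdgF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzKHqymmg0V1vliQxh4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "regulate", "emotion": "indifference"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding row by its comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codings = index_codings(raw_response)
print(codings["ytc_UgzKHqymmg0V1vliQxh4AaABAg"]["responsibility"])  # company
```

With the index in hand, inspecting the exact coding for any comment is a single dictionary lookup; malformed model output (non-JSON, or rows missing `id`) would raise here, which is a reasonable place to catch and log it.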