Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Question for audiences: Even if ai was perfected, does anyone have any desire to…" (ytc_Ugw6LdPci…)
- "That is an incredibly long output, wow, but after stopping to read a couple of r…" (rdc_mbnry5r)
- "It's now been 10 months since this video has come out - nearly half-time on the …" (ytc_Ugwl0e_aa…)
- "🙏 Yes , quite scary what these AI generated visuals & audio can do if they bein…" (ytc_UgxKSnjuJ…)
- "nothing is just invicible or really invincible like non tangible objects so just…" (ytc_Ugz9h36Dp…)
- "@Tovenaar13 Really? Electricity: “it’ll electrocute all of us” Trains: “going fa…" (ytr_Ugzk_52tG…)
- "I'm the Biggest AI and automation fan in the world. It is the key to a world whe…" (ytc_Ugz-Ydvtp…)
- "Lazy politicians need AI to do their jobs. We need to fire everyone in DC.…" (ytc_Ugw0qFOQJ…)
Comment
I haven't interacted with chat bots much (unless you count some forum posts). My understanding is that they respond inconsistently to adverse prompts and also have a tendency to give in on a topic if you really keep pushing them. Probably because a LLM doesn't "care" in a traditional sense and it picks up on the pattern of what the user wants to hear. For all its flaws, these algorithms have always been extremely good at picking up on patterns. The nuance of "don't always tell people what they want to hear based on these exceptions" isn't as easy to pin down.
youtube · AI Harm Incident · 2025-12-14T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzqRhO89FjvV0uhoSF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-zDusOylVZ2cetjV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzdsKrPHztsh_Te-rp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugww16B_sMFOuf8WGKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzW4mgvQwYfXmjFkZV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx-pI-oBduZNk_P0cV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyYQ0DvG_WgV_oVz6V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQ8jglP7eRbg6kShZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxRLj7pO6iWrmANzgt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzgtdSn2sH8FKsHCHR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
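Because the raw response is a JSON array of one object per coded comment, the "look up by comment ID" step reduces to parsing the array and indexing it by `id`. A minimal sketch (the `raw_response` string is abbreviated to two records from the batch above; variable names are illustrative, not from the app's codebase):

```python
import json

# Abbreviated copy of the raw LLM response shown above (two of the ten records).
raw_response = '''
[
  {"id": "ytc_UgxRLj7pO6iWrmANzgt4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzqRhO89FjvV0uhoSF4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]
'''

# Index the batch by comment ID for constant-time lookup.
codings = {record["id"]: record for record in json.loads(raw_response)}

# Retrieve the coding for a single comment, as the inspector does.
coding = codings["ytc_UgxRLj7pO6iWrmANzgt4AaABAg"]
print(coding["responsibility"])  # distributed
print(coding["emotion"])         # mixed
```

This also makes it easy to cross-check that the values displayed in the Coding Result table match the model's raw output for the same ID.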