Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
As a famous line from an great A.I. movie said.
"We're designed to destroy our s…
ytc_UgyRNzClK…
I don’t remember who, but remember seeing someone say something along the lines …
ytc_Ugw7cnL6g…
guys I don't know why but after lying to chat gpt and telling the ai this it sto…
ytc_UgwTRrX2S…
The computer would make paper obsolete. NO, it did not, it increased the demand…
ytc_UgwGI1Rmk…
Seems all the hype around AI is about producing code. As Dave says (3:16) "20% o…
ytc_Ugz0dq-wM…
I feel like I'm watching a conspiracy reel. .... if it's doing other more compli…
ytc_Ugy-YPCOC…
Ankur please provide proper guidence like resources to learn and resources to ap…
ytc_UgwliONC1…
Regulations in 50 states would cripple the United States beyond belief, china wo…
ytr_Ugxy9x3wv…
Comment
Geoffrey is a remarkable man, someone who may have played a role in the beginning of humanity’s end. And yet he’s so wise and straightforward that I believe if he were the one leading the way, we might actually be okay.
Steven does a solid interview, but, and this is the first one I’ve watched, something feels off. Maybe it’s just the contrast with Geoffrey. Steven is such a well-groomed, media-trained man who’s done financially well and talks about millions like they’re pocket change... he somehow feels eerily similar to AI: conscious, but detached from real life, like he understands everything, but connects with nothing. Or something like that. I could be wrong here.
youtube
AI Governance
2025-06-17T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxWiuR2kRc-yDAoG7d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxFMSkdP4ggOuWhYl54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxjqEb5ZfZ6Gmwere54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw6PE_qUnUnp5VOukd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxAJvnPFvM805qLJ8t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgycijC5bEROdpDqsE14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwe19egIF9EJcmbMXx4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEdeqTyJ_-8gDyLSp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxfcGg5veCP1r4UaJV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzJQYT0WKLgtC3UHfl4AaABAg","responsibility":"ytc_UgzJQYT0WKLgtC3UHfl4AaABAg","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
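Before a record like the ones above is written into a coding result, the raw response needs to be parsed and sanity-checked. A minimal sketch of that step, assuming the allowed values are exactly those observed in the sample output and in the table above (the real codebook may define more):

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample response above and are an assumption, not the definitive codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "none"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id"):
            continue  # every record must carry a comment ID
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical sample input for illustration:
sample = '[{"id":"ytc_abc","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"}]'
print(len(validate_records(sample)))  # → 1
```

Records with an unknown dimension value are dropped rather than repaired; depending on the pipeline, re-prompting the model for the offending IDs may be the better recovery strategy.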