Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
we should shut all ai down. but the people in charge are too dumb and greedy…
ytc_UgxHxsbGq…
Most these business owners are weird, I don’t blame Ai tryna expose them 😂 it mi…
ytc_UgyPrhN5P…
Then in terms of the moderator he should ban everyone since the way of the world…
ytc_UgzPyfj6X…
The first problem with the notion that AI will never become conscious is no-one …
ytc_UgwGE_N3y…
Great interview and the point of strictly regulating AI development by any coun…
ytc_UgxeJFPMN…
Given how poorly (safety-wise) the bat bomb went during testing, I can't see aut…
ytc_Ugx6im3wW…
Great video. Using social media helps AI build a profile on us that knows us bet…
ytc_UgwQiipNZ…
I'd laugh if someone recorded themselves actually replicating that AI masterpiec…
ytc_UgyT_8Hl2…
Comment
This video isn’t a real whistleblower leak. It’s a speculative YouTube production with no evidence behind any of the claims. There are no documents, no sources, no insider confirmations, and nothing in the video is backed by actual AI researchers or organizations.
Real AI experts talk about things like job disruption, misinformation, and the need for regulation, but nobody credible is saying AGI is secretly arriving in 2026 or that some hidden catastrophe is coming. That’s just fear‑based storytelling.
These channels mix real concerns with dramatic editing and hypothetical scenarios to make it sound like a warning, but it’s entertainment, not a leak. If something this big were real, it wouldn’t be revealed through a random YouTube video with no verifiable sources.
So yeah, interesting video, but not factual.
youtube
2026-04-22T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy5MFN7Z6Ctyuw94Ml4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy7n7mA0c_U2laqSNx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxVc2pHOtIONMwtiEF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugw5fL6hXbQrZNHTfkh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw_v5WOqbfTKRT8Ws94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwlqSrSjBMS0dNfGmt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw0vxZS8QcNKz_GlMB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzCGKTq9K2fCeLkZMp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyTy-59cvMzLQ-lIDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugz900JNKncrCPzBdbp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
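The raw response above is a JSON array of coded records, one per comment, with the four dimensions shown in the Coding Result table. A minimal sketch of parsing and validating such a response follows; the allowed values are assumptions inferred from the output shown here, not an authoritative schema, and the function name is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the observed output above
# (an assumption, not the tool's official schema).
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself", "distributed",
                       "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability", "ban"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference",
                "resignation"},
}

def parse_coded_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and sanity-check each record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dump all carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected {field}={rec.get(field)!r}")
    return records
```

Validating before ingesting makes malformed model output (missing fields, values outside the codebook) fail loudly at parse time rather than silently corrupting the coded dataset.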