Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below (a minimal lookup sketch in code follows the list):
- Only slightly related, there is a great trend right now, which is to ask the AI … (`ytc_Ugz3PE4mj…`)
- Also which AI? Consumer free ai is the worse kind and today is the dumbest it wi… (`ytr_Ugzz06pgq…`)
- "We believe a ban that's difficult to enforce is better than a world flooded wit… (`ytr_Ugx_Dp7z_…`)
- As someone who is actually disable and has a prosthetic for an eye I can't belie… (`ytc_UgzEzj9Vg…`)
- Ignorant statement showing how a rich guy is out of touch with what labor is. U… (`rdc_kih0pvh`)
- Yeah, the conclusion is only based on FAO's compilation of research whose scope … (`rdc_eh43wo7`)
- 2:25 or the reason they use the chat bots is because they have been pressured in… (`ytc_UgwOg2Lqd…`)
- Ai builds nerouns we are biological ai. A baby stubs his toe and says out that h… (`ytc_UgxJMQnA-…`)
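Under the hood, a lookup like this only needs the stored raw responses indexed by comment ID. A minimal sketch, assuming the responses live in a JSON Lines file where each line is one model reply, i.e. a JSON array of coding objects like the one shown at the bottom of this page; the file name and layout are assumptions, not the project's actual storage format:

```python
import json

def index_raw_responses(path: str) -> dict[str, dict]:
    """Map comment ID -> the coding object the model emitted for it.

    Assumed layout: each line of the file is one raw LLM response,
    itself a JSON array of {"id", "responsibility", ...} objects.
    """
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            for row in json.loads(line):
                index[row["id"]] = row
    return index

# Usage (file name is hypothetical):
# codings = index_raw_responses("raw_llm_responses.jsonl")
# codings["ytc_UgxJfl0kdHVI5-QiSI54AaABAg"]
```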
Comment
I was talking to ChatGPT and sent it the video title, and here's its reply:

> I get why this video is scary — it’s built to grab attention. Some leading researchers do warn about very serious long-term risks from advanced AI and call for urgent safety work and governance, which is worth taking seriously. For example, groups of researchers and leaders have signed statements urging global attention to extreme AI risks. [Center for AI Safety]
>
> That said, there’s a big difference between “could happen” and “is imminent.” Surveys of AI researchers show wide disagreement: some assign non-zero probabilities to catastrophic outcomes, others see much lower odds or emphasize nearer-term harms we’re already facing (misinformation, surveillance, inequality, job disruption). The debate is unsettled and uncertain, not settled doom. [AI Impacts]
>
> If the topic worries you, the useful response is not fear alone but supporting real fixes: fund and demand transparency, regulation, safety research, and international norms — and prefer reputable coverage over clickbait. There are lots of people and institutions working on concrete safeguards (policy groups, standards bodies, and lab safety teams). If you want, I can post a short, sourced summary of mainstream reads on this that’s calm and factual. [NIST Publications]
youtube · AI Harm Incident · 2025-09-13T08:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
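The four dimensions in this table come from a fixed codebook. Below is a sketch of the record type implied by the outputs on this page; the label sets are assumptions collected only from values visible here (e.g. `ai_itself`, `distributed`, `industry_self`) and the real codebook may define more:

```python
from typing import Literal, TypedDict

class Coding(TypedDict):
    """One coded comment, as emitted in the raw LLM response below.

    Label sets are inferred from values seen on this page only.
    """
    id: str  # comment ID, e.g. a "ytc_…", "ytr_…", or "rdc_…" key
    responsibility: Literal[
        "none", "user", "developer", "company", "ai_itself", "distributed"
    ]
    reasoning: Literal["consequentialist", "deontological", "mixed"]
    policy: Literal["none", "regulate", "industry_self"]
    emotion: Literal[
        "fear", "outrage", "approval", "resignation", "indifference", "mixed"
    ]
```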
Raw LLM Response
```json
[
  {"id":"ytc_Ugw7-1JwWyhMtnjmdbB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzu1_1Nduedekejn-d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyvIUrR2i8N00mf2sd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxuc_VPuVaKhTFZ__d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJfl0kdHVI5-QiSI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyS-NPoInJYwOSlzI14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxpPpQTEHiXRwyAm7Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx-hbWnCRcOFBY4L-l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyXap5T6VVjZQHW0oV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzW2ZIhg3asUuyE4Bx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
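The "Coding Result" table above is simply the element of this array whose `id` matches the inspected comment; here that appears to be `ytc_UgxJfl0kdHVI5-QiSI54AaABAg`, the only row with `regulate`/`fear`. A minimal extraction sketch:

```python
import json

def coding_for(raw: str, comment_id: str) -> dict | None:
    """Return the coding object for one comment ID, or None if absent.

    `raw` is the JSON text of one raw LLM response, like the array above.
    """
    for row in json.loads(raw):
        if row["id"] == comment_id:
            return row
    return None

# coding_for(raw, "ytc_UgxJfl0kdHVI5-QiSI54AaABAg")
# -> {"id": "ytc_UgxJfl0kdHVI5-QiSI54AaABAg", "responsibility": "none",
#     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
```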