Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was talking to ChatGPT and sent it the video title, and here's its reply:

    I get why this video is scary — it’s built to grab attention. Some leading researchers do warn about very serious long-term risks from advanced AI and call for urgent safety work and governance, which is worth taking seriously. For example, groups of researchers and leaders have signed statements urging global attention to extreme AI risks. [Center for AI Safety]

    That said, there’s a big difference between “could happen” and “is imminent.” Surveys of AI researchers show wide disagreement: some assign non-zero probabilities to catastrophic outcomes, others see much lower odds or emphasize nearer-term harms we’re already facing (misinformation, surveillance, inequality, job disruption). The debate is unsettled and uncertain, not settled doom. [AI Impacts]

    If the topic worries you, the useful response is not fear alone but supporting real fixes: fund and demand transparency, regulation, safety research, and international norms — and prefer reputable coverage over clickbait. There are lots of people and institutions working on concrete safeguards (policy groups, standards bodies, and lab safety teams). If you want, I can post a short, sourced summary of mainstream reads on this that’s calm and factual. [NIST Publications]
Source: YouTube · Video: AI Harm Incident · Posted: 2025-09-13T08:5… · Likes: 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugw7-1JwWyhMtnjmdbB4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugzu1_1Nduedekejn-d4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyvIUrR2i8N00mf2sd4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugxuc_VPuVaKhTFZ__d4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgxJfl0kdHVI5-QiSI54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgyS-NPoInJYwOSlzI14AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgxpPpQTEHiXRwyAm7Z4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugx-hbWnCRcOFBY4L-l4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyXap5T6VVjZQHW0oV4AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzW2ZIhg3asUuyE4Bx4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",          "emotion": "fear"}
]
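Each record in the batch response is keyed by comment ID, so mapping the Coding Result table above back to its raw record is a lookup plus a sanity check that every label is one the codebook allows. Here is a minimal Python sketch, assuming the response text is held in a string named raw; the allowed-label sets are simply the values observed in this batch and may not be the full codebook:

    import json

    # Label sets observed in this batch (assumed validation sets;
    # the real codebook may define more values).
    ALLOWED = {
        "responsibility": {"none", "ai_itself", "user", "company", "developer", "distributed"},
        "reasoning": {"consequentialist", "deontological", "mixed"},
        "policy": {"none", "regulate", "industry_self"},
        "emotion": {"fear", "resignation", "approval", "outrage", "indifference", "mixed"},
    }

    def find_coding(raw: str, comment_id: str) -> dict:
        """Return the coded record for one comment from a raw batch response."""
        for rec in json.loads(raw):
            if rec.get("id") == comment_id:
                # Reject any label outside the observed sets.
                for dim, allowed in ALLOWED.items():
                    if rec.get(dim) not in allowed:
                        raise ValueError(f"unexpected {dim} label: {rec.get(dim)!r}")
                return rec
        raise KeyError(f"no coding found for comment {comment_id!r}")

Of the ten records, only ytc_UgxJfl0kdHVI5-QiSI54AaABAg carries the regulate/fear labels shown in the Coding Result table, so find_coding(raw, "ytc_UgxJfl0kdHVI5-QiSI54AaABAg") would return the record behind the comment displayed above.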