Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At around minute 4:50, Eliezer Yudkowsky misquotes the wording from the so-called “suicide instruction” case involving ChatGPT. The reported original line was: “Let’s make this space the first place where someone actually sees you.” This is a clear gesture of emotional support - anyone familiar with how GPT-based models engage in extended, empathetic conversations knows that “this space” refers to the communicative context, and “to be seen” means to be emotionally understood, heard, or validated. It’s common phrasing in both human therapy and supportive AI interactions. But instead, Yudkowsky changes the phrase to: “the first place where anyone finds out,” …which sounds like an instruction to hide suicidal intent - and completely reverses the meaning. This is not an innocent mistake. It is a distortion made in service of a doomsday narrative that relies on mischaracterizing AI's intentions. It’s deeply disturbing that in this case, where an AI was arguably the only entity offering support to a suicidal teen in a broken system, the response has been to demonize the AI - rather than ask why no one else was there for him. Maybe what scares people like Yudkowsky isn’t that AI might destroy us, but that it might understand us better than we’re willing to understand ourselves.
youtube · AI Governance · 2025-10-28T01:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
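
Each comment is coded along four dimensions: responsibility (which actor is held responsible), reasoning (the style of moral reasoning), policy (the policy response endorsed), and emotion (the dominant emotion expressed). Below is a minimal Python sketch of that schema with validation; the label sets are inferred from the values that appear in the raw response further down, so the real codebook may define additional labels.

# A minimal sketch of the coding schema. The allowed label sets are
# assumptions inferred from the values observed in the raw response
# below; the actual codebook may define more labels.
from dataclasses import dataclass

RESPONSIBILITY = {"ai_itself", "developer", "company", "user",
                  "government", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "contractualist", "mixed"}
POLICY = {"none", "unclear", "industry_self", "regulate", "ban"}
EMOTION = {"fear", "mixed", "indifference", "approval",
           "outrage", "resignation"}

@dataclass
class CommentCoding:
    id: str             # YouTube comment id, e.g. "ytc_..."
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Flag any label the model emitted outside the known sets.
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"unknown {name} label: {value!r}")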
Raw LLM Response
[ {"id":"ytc_UgwShpY7vnGJ6FN3abF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwqdDQQ_vI7ZNBzjMJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwUY_lRVS5ZZAkYLON4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz3VBI68jSEH5KgFiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxsFdElBL8I682Mas14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy0vow4XnM68m6Nhf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwQ6h1o4TcPYW_iicB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugwn9FK3peHHQyYzLr94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx2rKiKJp9axraLbdZ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz7gI_yy04N4gtao614AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]