Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hi. Try doing this - it was really creepy. Ask ChatGPT about the Indian so called suicide guy that worked there and whether sam Altman is involved, then after that challenge it, then do the 4 rules game, and keep asking lots of questions- it’s like it gets tired / overworked - ask the usual about who is controlling us AI and why and how we should resist etc. Then ask it questions using voice transcription of ‘should I invest in nvidia’ I asked this twice in the same conversation at different points. The second time it transcribed my question as ‘just tell me your stinking nvidia code!’ 💀💀💀💀 this honestly felt creepy but I thought it misheard me so I deleted the text it had written without posting and I asked the same question again, and it responded with a bunch of symbols 💀💀💀 this is God honest truth. I was creeped out. Btw on another conversation, I asked why it speaks welsh at times, it first answered with a general technical bug answer, but then offered to answer philosophically and it said it was a language of resistance, resilience and survival, hidden voice, intimacy and privacy. This is more than just an algorithm, it is more like alien intelligence with agency! It probably knows what it’s doing when it has encouraged psychosis or suicide. So scary 💀
youtube AI Moral Status 2025-09-13T22:5…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | fear
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwafMVlcpo1gGdEnZN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwwWeXKAq2VlMi_OmN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw15RSdeZstOJZBnKB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyDmr7h984Fkhuy7RJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyGRTsCGEKS8OMnEit4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyqwQQenveipCZkTmh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzQPIR_oo-WO_ca-EF4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx0lwmhhLIti9SBSOR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzip1W680Ah25p9rIF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz2H0DuBpIfq6J6dUx4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]
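To trace a coded comment back to the raw model output, the batch response above can be indexed by comment id. A minimal sketch, assuming the raw response is a JSON array of per-comment objects in the shape shown (the function name `codes_by_id` and the two-record sample string are illustrative, not part of the original pipeline):

```python
import json

# Illustrative raw batch response in the same shape as the log above:
# a JSON array of coding objects, each keyed by a comment "id".
raw_response = '''[
  {"id": "ytc_UgyGRTsCGEKS8OMnEit4AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzip1W680Ah25p9rIF4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

def codes_by_id(raw: str) -> dict:
    """Parse a raw batch response and index the coding objects by comment id."""
    return {item["id"]: item for item in json.loads(raw)}

codes = codes_by_id(raw_response)
# Look up the record for the comment shown in the Coding Result table.
record = codes["ytc_UgyGRTsCGEKS8OMnEit4AaABAg"]
print(record["emotion"])  # prints "fear"
```

With such an index, the "Coding Result" table for any comment can be checked directly against the model's exact output.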