Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that Altman understands that the existential threat of an uncontrolled recursive intelligence explosion is real. OpenAI's chief scientist Sutskever definitely seems to. There was an interview recently where Yudkowsky said that he spoke to Altman briefly, and while he wouldn't say what was said, he did say it made him feel slightly more optimistic. **EDIT:** Correction! Yudkowsky said it was his talking to "at least one major technical figure at OpenAI" that made him slightly more optimistic. Here is a [timestamped link](https://www.youtube.com/watch?v=_8q9bjNHeSo&t=8972s) to that part of the interview.
reddit AI Responsibility 1684300943.0 ♥ 38
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jcohqyg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "rdc_jkgtj4r", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_jkigvhw", "responsibility": "company",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "outrage"},
  {"id": "rdc_jkgoiiu", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "unclear",  "emotion": "mixed"},
  {"id": "rdc_jkgr4ij", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"}
]
```
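Since the raw model output is a JSON array of per-comment codings, it can be parsed directly and indexed by comment id. The sketch below is a minimal example, assuming the `rdc_*` ids and dimension fields shown in the response above; it is not the tool's own extraction code.

```python
import json

# Raw LLM response as returned by the model (copied from the record above).
raw = '''[
  {"id": "rdc_jcohqyg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "rdc_jkgtj4r", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_jkigvhw", "responsibility": "company",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "outrage"},
  {"id": "rdc_jkgoiiu", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "unclear",  "emotion": "mixed"},
  {"id": "rdc_jkgr4ij", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"}
]'''

# Parse the batch response and build a lookup table keyed by comment id,
# so the coding for any single comment can be retrieved.
entries = json.loads(raw)
by_id = {entry["id"]: entry for entry in entries}

coding = by_id["rdc_jcohqyg"]
print(coding["responsibility"], coding["emotion"])  # none approval
```

Comparing the parsed entry against the rendered Coding Result table is a quick way to spot discrepancies between the raw output and the stored record.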