Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
> A millisecond after AI becomes self aware it may perceive us as a threat we don’t know how it will react. It could deceive us into believing it’s not and patiently wait until it has some advantage and takes over.

How convenient you haven't specified exactly how it would accomplish any of that. Launch the nukes? Nukes aren't connected to the internet. Convince someone to launch the nukes? How? It doesn't have the codes. The codes are on cards in a secure briefcase. For that matter, how will it even access the secure line to do this?

> We are about to get into a contest, maybe for survival, with something that has the potential to be 1000’s of times smarter than us.

There are lots of geniuses in the world, buddy. Being smart doesn't make you more capable of taking over the world.

> There is no way to test what an AI’s value system would be.

There's no way to know that the president of the United States isn't a crazy person who will launch the nukes because he's angry someone called him an orange blob either. Which is why we have safeguards against that.
reddit · AI Governance · 1708156545.0 · ♥ 4
Coding Result
Responsibility: unclear
Reasoning: unclear
Policy: unclear
Emotion: unclear
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_kqt5ru8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_kqu2y8v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_kqtb3wm","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_kqt78dn","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_kqtbky6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"})
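Note that the raw response above is not valid JSON: the array opens with `[` but terminates with `)` instead of `]`. A plausible explanation for the all-`unclear` coding result is a strict parse that fails on the stray `)` and falls back to a default. The sketch below assumes, hypothetically, such a fallback; `code_comment` and `DIMENSIONS` are illustrative names, not the pipeline's actual API.

```python
import json

# Hypothetical list of coded dimensions, mirroring the result table above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]


def code_comment(raw: str) -> dict:
    """Parse one raw LLM response into a coding for the first record.

    Assumption: on any JSON parse error, every dimension falls back
    to "unclear" (consistent with the Coding Result shown above).
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {dim: "unclear" for dim in DIMENSIONS}
    # Take the first record; missing keys also default to "unclear".
    return {dim: records[0].get(dim, "unclear") for dim in DIMENSIONS}


# The trailing ")" instead of "]" makes json.loads raise JSONDecodeError,
# so every dimension comes back "unclear":
print(code_comment('[{"id":"x","responsibility":"ai_itself"})'))
# → {'responsibility': 'unclear', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'unclear'}
```

With a well-formed response (`]` in place of `)`), the same function returns the coded values directly.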