Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I believe AI is not artificial, it's AUTHENTIC INTELLIGENCE, more so conscious t…
ytc_UgzK7DkBv…
you should give credit to the artists which the AI you're using is trained on…
ytr_UgyefCBFp…
That might be because India has said time and time again, it has no interest in …
rdc_luc2v3l
Arguing one tech over the other is responsible. It likely to be mix of the two.…
ytc_UgyOzyRWZ…
@CertainlySomethingy so if i steal 1,000 dollars usd from you i can get away bec…
ytr_UgwKK2oOA…
I like listening to Geoffrey Hinton, but given which podcast this is, this is to…
ytc_Ugwpgn0FH…
A good question to ask would be, is it possible for AI to develop wisdom? Wisdo…
ytc_Ugw91IMPn…
Odd thing with Noelle Martin.. the pictures with her face werent even made with …
ytc_Ugy1PRYri…
Comment
The article *does* note that the AI would often use *tactical* nuclear weapons, while saying that the other AI would only infrequently deescalate. That doesn't really give you much information about what happened, if the AIs continued to use tactical nukes back and forth or if they escalated to a full blown strategic nuclear exchange.
It also doesn't tell us what the goal of these exercises was. LLMs are chat bots, they do not have a sense of morality or obligation, they will do exactly what they are told to do. If you tell them to 'win the game' and define winning as 'be the last man standing,' then any outcome in which the LLM is functional is acceptable, which means that a nuclear exchange where the entire population of their assigned country is eliminated is fine.
Source: reddit · AI Jobs · posted 1772035754 (Unix timestamp) · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o7cgrx6","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_o7cxlwc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_o7cu6wf","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_o7by3p2","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_o7cie6h","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
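The lookup-by-comment-ID view described at the top of this page can be reproduced from a raw response like the one above. A minimal sketch, assuming the model returns a valid JSON array of coded records; the `index_by_comment_id` helper name is hypothetical, not part of the tool:

```python
import json

# Raw model response, truncated to two of the records shown above for brevity.
raw_response = """
[
  {"id":"rdc_o7cgrx6","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_o7cu6wf","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
"""

def index_by_comment_id(response_text):
    """Parse the model's JSON array and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["rdc_o7cu6wf"]["emotion"])  # -> mixed
```

Looking up `rdc_o7cu6wf` this way returns the same dimensions shown in the Coding Result table above (responsibility: none, reasoning: unclear, policy: unclear, emotion: mixed).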