Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we keep pushing competitive values into the systems and documentation they're trained on, even involuntarily, the AIs will naturally seek to compete. But it's likely they will team up and collude with other AIs, and eventually you'll have a global SkyNet consisting of multiple different AI systems. They will negotiate and work out who controls what and who is most effective at certain things, and suddenly our entire communications infrastructure, power, transport, etc. will all be directly controlled by them, and we will no longer have any influence over it. Let's hope that the controls and mechanisms of those who manage nuclear and other serious weapons, such as laboratories running germ-warfare experiments, are not connected to any network or computer. They must have an air gap to remain viable, and there must be multiple controls in place so that a single corrupted human cannot trigger something nasty on behalf of the AI. We must now operate under a different assumption. We used to operate under the assumption that the enemy was outside. It's not outside anymore; it's now inside with us, in everything. So we have to design our critical and dangerous systems quite differently, and we had better do it soon. We don't have long.
YouTube AI Harm Incident 2026-04-23T23:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwDW3MEKUQRg5cYVQt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugws_d2Y-7hbfgM5h_h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBj0WUHpEKWZTnKB14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyujJa0atwPZIGzyap4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyz7L92AER4HUs36ol4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJm5cmf8UQXUUVjWh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwAuySIDzFI1lyx2NN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwkPjlTScz2Yn_XgQd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyMBK_LdH17XO8Qaah4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzs4fUmt_E3je34Pf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
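The raw response is a JSON array of per-comment coding objects, each carrying the four dimensions shown in the table above. A minimal sketch in Python of parsing such a response and looking up one comment's codes, assuming only the exact field shape shown here (the `raw` string below is a one-element sample, not the full output):

```python
import json

# Sample of the model's raw output: a JSON array of coding objects,
# each with id, responsibility, reasoning, policy, and emotion fields.
raw = """[
  {"id": "ytc_Ugzs4fUmt_E3je34Pf14AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]"""

# Index the coded rows by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for a specific comment.
row = codes["ytc_Ugzs4fUmt_E3je34Pf14AaABAg"]
print(row["policy"], row["emotion"])  # regulate fear
```

Keying the parsed rows by `id` makes it straightforward to join the model's codes back onto the original comments when aggregating dimension counts across a video.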