Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am in favour of creating artificial intelligence, however I agree, SLOW THE FUCK DOWN, I am sure that there are certain protocols in place to prevent it from become independent, but once an AI, and worse yet, an AI with access to thousands of nuclear weapons becomes self aware, we have no idea what it would be capable of, and, if it sensed an imminent threat to its existence, then how far would it go to save itself? I am speaking hypothetically here, but the terminator movies are not all that fictional anymore, and we saw what happened when sky net became self aware and sensed an imminent threat to its being. However, in video games like halo and mass effect, there are AI's as well, while we create these intelligent machines, there need to be safe guards, and something that should never happen, are self thinking weapons, with their own consciousness, that is a line that should never be crossed.
youtube 2015-07-30T03:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Uggfmjc4dajsu3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgjLaVRywu1KoXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugghx3Nm4RuttHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Uggk4mR-nlY243gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UggYbY1ME8kwQ3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgjQhookpLNxr3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgjtLGr3PIz9P3gCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UggE3oe1ExSrOngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugjiws5jvbtj-3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgiaClDbKuhuMHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
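A minimal sketch of how a raw response like the one above can be parsed and a single comment's coding looked up by id. It assumes only the structure visible in the response (a JSON array of records keyed by `id`); the two embedded records are copied verbatim from the response, truncated for brevity.

```python
import json

# Raw LLM response: a JSON array with one coding record per comment.
# Two records copied verbatim from the response shown above.
raw = '''[
  {"id":"ytc_UggYbY1ME8kwQ3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Uggfmjc4dajsu3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Index records by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Retrieve the coding for one comment; this id matches the
# Coding Result table above (distributed / regulate).
coded = records["ytc_UggYbY1ME8kwQ3gCoAEC"]
print(coded["responsibility"], coded["policy"])  # distributed regulate
```

Indexing by `id` makes it straightforward to cross-check any row of the Coding Result table against the exact model output.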