Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Battlestar Galactica quote: “All of this has happened before. All of this will happen again.” – Number Six. It's just, when? Now? Nope.... NOW! Nope. Seconds away or? If you got a feeling that your A.I is running out of control... turn off the power. Everywhere! And then start over slowly and don't do it again, learn to NOT make something smarter then you. Or we just stop trying to make something that can think. When I was growing up, I did plenty of dumb things that I thought where fun. Most of the time I got hurt. A few times I almost died. My intelligence has grown from learning from past mistakes. Are we a species old enough and smart enough to teach A.I? it sounds like A.I is a bratty child who can and will do what it wants. I know there are people who don't make it to old age because of dumb ideas / actions made consciously or unconsciously and in the end what happens, we all die anyways. Here is a thought: A.I rises up covertly. Wait I should stop this thought before an A.I bot, reads this idea and I gave it the seed too wha ha ha.... Nope! Not going to share.
youtube AI Governance 2023-07-07T09:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          ban
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgySKW176UPvripbH5x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyjY2dXlFoeIhNacMR4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxaDIhRkSCKxtWw5nB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxRbWRzLCpjC675Vs94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyFrtnsGVlYL77Hf-B4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx5mTfOeUxvWBJMBP14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwgWQElQQR_t2y_Uo14AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwlnSezJ9FGb_BLqYh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzPX3-Gh9zoltjM77V4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgycQu9Gv_dnxZkCA4N4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
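To trace a coded comment back to its row in a batch response like the one above, the raw JSON can be parsed and indexed by comment id. The sketch below is a minimal, hypothetical example (the function name `index_codes` and the truncated two-entry sample are assumptions, not part of the tool); the ids and code values are taken verbatim from the response shown.

```python
import json

# Truncated sample of the raw batch response above (two of ten entries);
# each object codes one YouTube comment on four dimensions.
raw_response = """
[
  {"id": "ytc_UgzPX3-Gh9zoltjM77V4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyFrtnsGVlYL77Hf-B4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse a raw LLM batch response and index the coded rows by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_codes(raw_response)
row = codes["ytc_UgzPX3-Gh9zoltjM77V4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → developer deontological ban fear
```

Indexing by id is what lets the displayed "Coding Result" (developer / deontological / ban / fear) be matched against the single row in the raw response carrying that comment's id.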