Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you completely ignore the fact that neural networks are modeled off of how brains work and that the model worked and in the process we went from extremely simple character recognition to sophisticated AI AND we pretend the exact black box problem we face with AI is NOT the one we face with where human thought comes from we can pretend it is just math bro. Yeah, in the same fashion that the human, and every other thinking lifeform, is just electrical impulses - which is what that math represents. People can't seem to differentiate AI and neural networks from Microsoft Office. We emulated, using artificial silicon technology, the way brains work - for the most part. Magically, that worked. It is for that reason there IS a "black box" issue. It is a 1:1 issue between all intelligence. You can cite cause for where the choice of thought came from - whether it be stimuli or a prompt - but we aren't 100% where that thought originated because we don't understand the system we emulated. In turn, yes, AI is extremely dangerous. That cat, however, is out of the bag. People need to start catching up to similarities (outside of the words used by LLMs, as this argument goes for anything an AI does) and start thinking of these as an artificial creation exhibiting real intelligence. Once again, not emotion or experience, but mirrored intelligence. We are cooked. People want to sum it up as a magic trick so that our species can feel special and that denial is going to bite us. "Pattern recognition", neat, so is all intelligent thought. Why animals can get trained by zaps and treats. Why we recited our ABCs through childhood and recognize our mother's faces. Until people start realizing the implications of what we created there is no logical argument not to create it further because "just math pattern recognition software dude".
youtube 2026-02-04T19:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyYh-Ot8JR5oBJXY4Z4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxIMHL3MX4xzlUgGdl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwe5N036ZVfOGHJ-kJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx8X7TeNXBaCilip954AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzgWqqpA7rrfhU4Lq94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzviCTevUvQ-LFJx054AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwTeukz7AIo-mWXD0V4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwzclpZjji0_tiY3xp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwPsDm0AnSpFD0-_5x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxCl3iYByl-rjAxeyh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]