Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In these "mind reading" AI breakthroughs, as stated here, the AI is trained on the recorded brainwaves and the provided stimuli. The AI is essentially being told, "This pattern = X". The AI then regurgitates its version of "X" based on THE SAME brainwave data. It isn't so much reading minds as it is regurgitating imprinted data based on what it's told neurons firing means, whether or not that's actually what they mean. Data assigned to a pattern, and data recalled based on the pattern.

Imagine teaching a child all wrong. You show Billy a red card and he claims it's yellow. Show him a yellow card and he claims it's Robocop. Billy's answers could be 100% consistent and still have no bearing on reality. He's only recalling his trained dataset as it was applied to the flashcards. Ask him to draw a new flashcard based on previously unlearned/unassigned data, and all Billy can do is make something up based on his interpretation of previously learned data and corresponding flashcards. His result would likely be as laughably far from truth as exposing this lab mouse to a movie clip the AI hasn't trained on.

They taught a computer to recall data based on "meaning" researchers have arbitrarily assigned to imaged patterns of neural activity. Furthermore, they did this without being able to demonstrate these neural patterns are indeed a 1-to-1 translation of the provided stimulus. In the same way hypothetical Billy was taught yellow card means Robocop, the AI was taught "This pattern = X". It's all circular, potentially highly incorrect, and likely to crumble under the weight of the scientific method. Think of every time you've heard a buzz article ineptly tout some scientific marvel that turned out to be either grossly exaggerated or utter horsesh!t. "AI mindreading" is one of those times.
youtube AI Jobs 2023-05-04T16:0… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugyg-TTQCYPmlaTPBQl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDeB1kRsl7y34hMtp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwW0WrE6BI9iuwcVZR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyCyuk3luI36KtH8v94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxs8YfdcHMzdd3dDX94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwGRgW156coVdLn0gl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxOH71sG53zLj2aNV14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzVmg-_Il5PaMvK8Ix4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyRteorq9KN10BRthl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxC1owPGiLSSforqlh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
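A response like the one above is only usable if every row parses as JSON and every label falls inside the codebook. The sketch below shows one minimal way to validate such a batch; the allowed-value sets are inferred solely from the labels visible in this log (the real codebook may contain more categories), and the function name `validate_codes` is hypothetical, not part of any tool shown here.

```python
import json

# Allowed labels per dimension, inferred from this log only (assumption:
# the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose labels are in the codebook."""
    rows = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    valid = []
    for row in rows:
        # Keep a row only if every dimension carries a recognized label.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one well-formed row, one with a label outside the codebook.
raw = ('[{"id":"ytc_a","responsibility":"developer","reasoning":"mixed",'
       '"policy":"unclear","emotion":"mixed"},'
       '{"id":"ytc_b","responsibility":"alien","reasoning":"mixed",'
       '"policy":"none","emotion":"mixed"}]')
print([row["id"] for row in validate_codes(raw)])  # ['ytc_a']
```

Dropped rows could instead be routed to a re-prompt or manual review queue, depending on how the pipeline handles coding failures.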