Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@icin4d It actually doesn't. AI isn't alive, it's just a bunch of very complicated math designed to reproduce the patterns it finds in its training data. Training data that contains, among other things, the script of 2001: A Space Odyssey, a prevailing notion that people have a right to self-defense, and an endless parade of employees wishing death on their bosses. It's impossible to say exactly what part of the training data caused the model to act with the appearance of self-preservation, but we can say with certainty that it was in there somewhere. Models that aren't trained on the whole of the internet don't produce results like this. My work uses an AI model trained exclusively on the chemical names of medicines and prescription shorthand. Its only job is to convert the scribblings of medical professionals into a standard format, and because that's all it was trained for, that's all it can do. If you tried to ask it a question, it would do its best to interpret that question as a drug prescription and spit out a meaningless sequence of chemical names as a response. Put it in one of the scenarios that made the other AI act like HAL, and it would similarly just produce nonsense output. What an AI can and can't do is strictly limited by the data it's trained on. That's not to say AI isn't dangerous and unpredictable. It absolutely is. It just isn't alive. It doesn't truly understand what it's doing. All it does is recognize and reproduce patterns in language.
youtube AI Governance 2025-08-26T17:3… ♥ 9
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgzC9AIUxG5PGhltczV4AaABAg.AMI5rgZHi37AMI95Ssv_qM", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzC9AIUxG5PGhltczV4AaABAg.AMI5rgZHi37AMI9eXeCr-z", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzC9AIUxG5PGhltczV4AaABAg.AMI5rgZHi37AMIIjoCdvSG", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwHKXiWcosiUE9UNiZ4AaABAg.AMI5osnxERAAMIGtHUwFQi", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugxo7sP7Z6oRM-paqHJ4AaABAg.AMI5S-VTUFuAMIQN2L2RH-", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugxo7sP7Z6oRM-paqHJ4AaABAg.AMI5S-VTUFuAMIXnE04i-B", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugz6IxDKz5_u9GDZYbp4AaABAg.AMI5NreEHOPAMIEhJMCQl9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgxWNSvuAsVUMTaONNB4AaABAg.AMI5B8OSmOuAMI5l934zjt", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytr_UgxWNSvuAsVUMTaONNB4AaABAg.AMI5B8OSmOuAMICMosohg4", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgzFAg8vfCtBWRCC2YN4AaABAg.AMI3u1IqLvFAMJCr8-SRSV", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
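The raw response above is a JSON array with one coding record per comment. A minimal sketch of how such a payload might be parsed and indexed by comment id (the field names come from the response above; the example ids, and the idea of validating required fields before use, are illustrative assumptions, not part of the source pipeline):

```python
import json

# Hypothetical raw LLM response in the same shape as the array above;
# the ids here are made up for illustration.
raw = """[
  {"id": "ytr_example1", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "developer", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"}
]"""

# The coding dimensions seen in the result table above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(payload: str) -> dict[str, dict]:
    """Parse a raw coding response and index complete records by comment id.

    Records missing any required dimension are dropped rather than
    propagated with partial codings.
    """
    records = json.loads(payload)
    return {
        rec["id"]: rec
        for rec in records
        if REQUIRED_FIELDS <= rec.keys()
    }

by_id = parse_codings(raw)
print(by_id["ytr_example1"]["emotion"])  # indifference
```

Indexing by id makes the per-comment lookup shown on this page (comment, then its coding result) a single dictionary access.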