Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it annoying that the head of an AI company makes AI sound mystical. He knows how this works. He just doesn't want to have to explain it and it sounds way cooler than:
1: we developed a technology usually running in pytorch roughly modeled on how we think brains work and called it a neural network
2: the more layers in this network the better the output
3: we developed complex mathematical formulas for the neural network
4: we train the neural network to come to conclusions based on statistical probability when the formula is applied rather than Boolean logic. The neural network is written to tweak itself based on feedback to the results
5: we can write it to try to train itself, which is faster but more apt to problems
The only x factor is when it tweaks the settings to yield favored results. But it isn't mystical
youtube AI Governance 2025-08-26T21:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          none
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugyfijxwmmv5hlP6WJ54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgywdY5lbnhLp5Psx2t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxhQPRH5VWnWGtXv2F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxWowV6huXXnqsp7SN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzOAhHpHrLu1k64BWZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
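A raw response like the one above can be turned into the per-comment coding rows shown in the result table with a short parsing step. A minimal sketch, assuming the JSON array and field names shown above; the `parse_codings` helper and the `DIMENSIONS` tuple are hypothetical names, not part of the tool:

```python
import json

# Abbreviated raw LLM response (two entries copied from the payload above).
RAW = (
    '[{"id":"ytc_Ugyfijxwmmv5hlP6WJ54AaABAg","responsibility":"company",'
    '"reasoning":"virtue","policy":"none","emotion":"outrage"},'
    '{"id":"ytc_UgxhQPRH5VWnWGtXv2F4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# The four coding dimensions visible in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Parse the model's JSON array into {comment_id: {dimension: value}},
    defaulting any missing dimension to "unclear"."""
    rows = json.loads(raw)
    return {
        row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
        for row in rows
    }

codings = parse_codings(RAW)
print(codings["ytc_Ugyfijxwmmv5hlP6WJ54AaABAg"]["responsibility"])  # company
```

Keying by comment id makes it easy to join a coding row back to the original comment text when inspecting a single record like this one.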