Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I honestly think this was the most unrealistic episode. As someone who is both a software dev and uses ChatGPT intensively because I'm curious about the model itself and what makes it tick there's a few things to honestly just consider:

- No, ChatGPT can't give you a proper summary of a medium (tv series, movies). Try non-mainstream titles from goodreads that still have 5k-10k ratings and it will butcher everything.
- ChatGPT makes a lot of errors especially when it comes to assembling code for it to compile.
- ChatGPT can get stuck hilariously and will just spit out loading loading loading.
- chatGPT has around 45TB but will regurgitate the same answers. "Give me a lemon cake recipe" and it'll happily give you the same stuff over and over.

When it comes to pointing out its errors it doesn't exactly learn. It will rephrase the paragraph with the errors in it. All this talk of it taking over is straight from sci-fi. Networks are isolated, system critical stuff are probably air gapped (network). I do think AI needs proper rules/laws intact because there's this thing called privacy and these corporations will do anything to get away with your data. I don't expect much on laws from the US compared to EU. Do I believe I'm fully right? No, that'd be incredibly arrogant. But for AI to take those big leaps it does need a lot of processing power and more importantly learn by its previous mistakes which at the moment is failing at.
youtube AI Governance 2023-07-11T03:5… ♥ 2
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | mixed
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgydjrtbZ4qtVpaLWGB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyCDHOBpvHaGJWbOu54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxRj3Ght1mI9nDJE1h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxFaIni7_kdkxHjkF14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgytSS-0Y4GI2p65DiN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzQuZOkQfwAH0Z9K514AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwxM3KCtcazz4XnWw54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"mixed"},
  {"id":"ytc_UgxutCn2CIyHW2GRNRx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw21nDJiYM5DYbvAT14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzc0amTNokFf_BAnHZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
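The raw response above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed into a lookup by comment id, using only Python's standard `json` module (the function name and the fallback-to-`"unclear"` behavior are illustrative assumptions, not part of the tool):

```python
import json

# Raw LLM response, truncated to two of the ten entries for illustration.
raw = '''
[
  {"id":"ytc_UgydjrtbZ4qtVpaLWGB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxutCn2CIyHW2GRNRx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse an LLM coding response into {comment_id: {dimension: value}}.

    Missing dimensions default to "unclear" (an assumption made here,
    not documented behavior of the coding tool).
    """
    records = json.loads(text)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = parse_codes(raw)
print(codes["ytc_UgxutCn2CIyHW2GRNRx4AaABAg"]["emotion"])  # -> indifference
```

The id-keyed dictionary makes it straightforward to join each set of codes back to the original comment, as the "Coding Result" block above does for one comment.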