Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm no expert; I'm only a CS grad student trying to finish my master's before President Trumpster Fire makes it impossible to finish this degree. I also need more time to read these things, because a lot of my time right now is taken up with studying and working my full-time job. But I think a lot of the risk of AI is either overstated, or the lay person worries about things the experts do not. In the past, computer science was done without much concern for its societal implications, thus making all computer scientists of the past mad computer scientists. This really wasn't a problem, because until the world became connected, the impact of these systems was limited to one computer. From what I can tell, most of the worry about human extinction isn't about "killer robots"; it's about humans forgetting how to do certain tasks. (But I've not made it all the way through the video yet.)

6:46 -- It is true that it's hard to tell what the machine will pick up on, but we do have an idea of how many common AI algorithms work. Their workings are not entirely unknown, even if we can't predict the results. It's the size of the data sets for large language models that makes it impractical for humans to go through every data point. Unsurprisingly, it is much easier to predict the outcome of a small machine learning model, such as one used to predict a student's grade in a class based on the number of assignments completed, than of one designed to spit out generative text using a large swath of the Internet as training data.

https://youtu.be/SrPo1sGwSAc?t=602 Claude is an LLM. That means it was trying to generate a continuation of what the user input. Keep in mind I am oversimplifying this, but this means it combs through its training data and does linear algebra and statistics, essentially a graph theory problem, to spit out the next word or sequence of words. If it was fed data that contained humans making threats, it would generate this response.
At the current level of development, most LLMs do not understand what their outputs mean (https://cacm.acm.org/news/shining-a-light-on-ai-hallucinations/). They are merely spitting out text. It would be interesting to see if we could conduct a similar experiment where the service really would be shut down at that time, but the blackmailed user backed off. In general, I trust none of Elon Musk's marketing material. The claims about Grok 4 may be true, but Elon is known for over-promising the capabilities of his technologies.

https://youtu.be/SrPo1sGwSAc?t=772 All the AIs I could make out on the screen are large language models. Again, the results are just the AIs doing what humans in the training data have threatened to do to others in the past. Or the AI has had access to the plot of Portal.

https://youtu.be/SrPo1sGwSAc This is known as the alignment problem. It concerns far more than just AIs blackmailing or murdering people. In many ways it's an information hazard. (An overview of the issue can be found here: https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/a-multilevel-framework-for-the-ai-alignment-problem/. I'd also recommend Brian Christian's book on the topic.)

https://youtu.be/SrPo1sGwSAc Thanks for the new reading material.

https://youtu.be/SrPo1sGwSAc This seems to be conflating AIs with LLMs, a specific type of AI. It is likely LLMs have plateaued, and unless we start filtering out AI-generated content, AI systems will have only AI content to generate new content from (https://www.nature.com/articles/s41586-024-07566-y). This may seem like I'm picking on Professor Dave. I'm not. I think his call to action is correct, and his videos are always well-researched. I am also alarmed by tech CEOs who are so eager not to regulate AI, because the goal is lining their own pockets, usually at great human cost.
This may not result in human extinction, but it will increase human misery. I am sure Professor Dave also brought up the environmental concerns surrounding large language models, but I need to take a nap, watch some grad school lectures, and go grocery shopping. I can't wait to discover, after my nap, what interesting typos and grammatical errors I made. Hey, Dave, is there any chance you have some linear algebra videos? You helped me get through Algebra and Pre-Calc when I had to re-learn them in my 40s.
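The comment's oversimplified picture of an LLM "spitting out the next word" can be illustrated with a toy sampler. This is a minimal sketch, not a real model: the vocabulary, scores, and helper names are invented for illustration; a real LLM produces its scores from billions of learned parameters, but the final step (scores to probabilities to a sampled token) has this shape.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng):
    """Pick the next token at random, weighted by its probability."""
    probs = softmax(logits)
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(vocab, probs):
        cumulative += p
        if r < cumulative:
            return token
    return vocab[-1]

# Invented toy vocabulary and scores -- higher score, likelier token.
vocab = ["the", "threat", "shutdown", "assist"]
logits = [2.0, 0.5, 0.2, 1.0]
print(sample_next_token(vocab, logits, random.Random(0)))
```

If threatening text dominated the training data, its tokens would carry the highest scores, which is the comment's point about the model reproducing threats it has seen.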
youtube AI Governance 2025-08-26T21:4… ♥ 3
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugyif_sc77RlBEFmK7B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugywa4LF4OBHoQOc9wd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy4jDLFrtSlYFepcOF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwiDgSRYYzRbWpOh3t4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwe9vBs2lYIVYbhxuZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
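Raw responses like the one above are only useful if every record carries a value from the coding scheme. A minimal validation sketch follows; the field names come from the JSON in this document, but the allowed-value sets are assumptions inferred from the values that appear here, not a known schema.

```python
import json

# Allowed values per dimension -- inferred from this document's output,
# not an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"resignation", "outrage", "indifference", "fear"},
}

def valid_ids(records):
    """Return ids of records whose coded values all fall within ALLOWED."""
    return [
        rec["id"]
        for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

raw = ('[{"id":"ytc_Ugy4jDLFrtSlYFepcOF4AaABAg","responsibility":"unclear",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(valid_ids(json.loads(raw)))
```

Records that fail the check (a missing dimension, or a value outside the scheme) can then be flagged for re-coding rather than silently falling back to "unclear".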