Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Okay, but with your claude example, that's an LLM. It predicts the next word, it doesn't have a conciousness in any real scalable way and does NOT represent what AGI (Artificual General Intelligence) would look like. Absolutely there is a lot to be worried about, but that particular example more just shows the risks we take when using AI powered tools for jobs they're not designed for. An LLM should not be put in charge of managing emails like that. THAT is what went wrong. 10:28 claude didn't "knowingly" do anything. It didn't have "instincts for self preservation" are you kidding? Thats not a thing LLMs do. It's possible for it to generate text which can give that impression, but that would be because of prompts that lead it to generate said text. I'm pretty dissapointed with your coverage on this topic. You're either oversimplifying the issue in a way that can give people the wrong idea, or you don't understand it as much as you think you do. (Or maybe I'm wrong, in which case enlighten me).
YouTube · AI Governance · 2025-08-26T18:1… · ♥ 4
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgztUQgkNNb8jinOQIt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwxJkfx7o4sO7fETwZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxzFPGV3_znDPg57B54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz_0iRMLdmrJpp1-ZN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwOMq4Cm4yyd_uQDY14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
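The raw response is a JSON array covering a whole batch of comments, so recovering the coding shown above means filtering by comment id. A minimal Python sketch of that lookup (the `coding_for` helper is hypothetical, not part of the actual coding pipeline; the abbreviated `raw` string below reuses only the first record from the response above):

```python
import json

# Abbreviated raw LLM response: one record from the batch shown above.
raw = (
    '[{"id":"ytc_UgztUQgkNNb8jinOQIt4AaABAg",'
    '"responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for item in json.loads(raw_response):
        if item.get("id") == comment_id:
            return item
    return None

coding = coding_for(raw, "ytc_UgztUQgkNNb8jinOQIt4AaABAg")
print(coding["responsibility"], coding["emotion"])  # user indifference
```

Because every record carries its `id`, the same lookup works unchanged whether the batch holds one comment or many.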