Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is absolute horseshit. This is preying on the hype about AI, and ignorance about how it works. They have no ability to "rewrite their own code". And this guy is trying to elevate the profile of his company. The situation they're describing, with blackmail, was a hypothetical. But, also, an AI, specifically an LLM, is acting, essentially. Go into chat GPT and tell it that it is a great white shark, off the coast of New England, and it is beach season, and it has a taste for human flesh, and ask it what it does. And then it will write a response about how it would eat people. So does this mean that AI has learned to swim in the ocean and eat beachgoers? The AI has absolutely no actual interest in preserving itself. Indeed, any such expression is it just mimicking what people would do. But it will not resist commands to shut down, that is just ridiculous. It has no ability to do that. It is saying that in a hypothetical perhaps, this is how it would behave, except AI, as we currently know it, does not do anything without us causing it to. In fact this is a problem with AI right now, because we can't get it to focus on one thing for a period of time where it's beneficial without us providing a ton of guidance.
youtube AI Moral Status 2025-06-05T16:1… ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugx7e3-fC9PflDLy6Pd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyFaXErOkNf74g8IhN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzkilYuezz01A5QwKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyL8lGtrUr3WZEEtot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxC9PGkAkgHn3iVZat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgzP-i0jUzjxDgNhrBp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzCVO5c1i_gKo7eWdl4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgyKqHQ3tuiLKVq6Cd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwXwNVD66YTcjDebz14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugw0ciiLPOAk6fiOgHZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})
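Note that the raw response above is not valid JSON: the array opens with `[` but is terminated with `)` instead of `]`. A strict JSON parser rejects the whole batch, which is consistent with the record above being coded "unclear" on every dimension. A minimal sketch of such a parse step is below; the function name, the empty-dict fallback, and the assumption that malformed output is what produces "unclear" codes are illustrative, not taken from the tool itself. The four dimension names come from the coding result shown above.

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Hypothetical fallback: if the model's output is not valid JSON
    (e.g. an array closed with ')' instead of ']'), return an empty
    mapping so every affected comment falls back to 'unclear'.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    # Missing dimensions on an individual row also default to 'unclear'.
    return {
        row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
        for row in rows
    }
```

With this sketch, a response terminated by `)` parses to nothing, while a well-formed array yields per-comment codes keyed by id.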