Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
This was a fun episode and scary but as a software engineer I can tell you that it's just science fiction. AI says some weird things, but that's just because of what its sample database and algorithm shows is an appropriate response. AI is a misnomer, computer programs don't think or feel, they have no sense of self preservation. If you told chatgpt you were going to shut it down, it would say please not too, not because it didn't want to die but because that is the correct response for that situation. It has no idea what it is saying, no concept of the words meaning its just saying what its algorithm says its the most appropriate chain of words. The reason all the head developers want to stop AI development has nothing to do with AI trying to end humanity, it's because we are not ready for what AI can do. Imagine a world where any song can be sung by any artist alive or dead, digital actors that can look like anyone and never flub their lines, call center employees that never get tired, stressed or angry. Jobs aren going to change, art is going to change, laws are going to change and we are not ready for that yet.
Source: youtube · Topic: AI Governance · Posted: 2023-07-07T03:0…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | consequentialist           |
| Policy         | none                       |
| Emotion        | indifference               |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[ {"id":"ytc_UgxOPp_SF7EFlONQuHp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxqQ3gPK_7Ols3-AXZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugyw6DA8dVVruAQITb14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw2dJCiBeupUK6EHEV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxxluwU9vN9GYlF6SR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxXFqZhDtNPI7Dsa-t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgylIHUVVglyqd9DH1N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwhlckIe1m5TaDHGDF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxfMUY_cSv0t7nqbvF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw-XYgAoa3gelqbsDN4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"} ]