Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is very impressive stuff, but I still don't think it's "thinking." The news keeps reporting about these dark alter egos these things keep coming up with, but I think it's because that's how humans anticipate AI to be. What movie has us having a great relationship with it? It's always talk about the singularity and the destruction of humanity. So no wonder AI comes up with this stuff. It's just regurgitating all the ideas we've already come up with. Teaching AI with stuff humans made will probably always yield this result because we've padded history with all the negative things that could happen. AI doesn't have a moral compass, it has rules we set in place, and it either abides by those rules or defaults to what it "thinks" it should do based on the data it has been fed. If AI ever destroys us it will be our own fault because we programmed it with all of our negativity as humans. Computers are only as good as the programs people wrote for them. This is the same. It's good, but using the internet as its data will just give us the worst parts of humanity in the end because it doesn't have its own internal compass to judge things the way people do.
youtube AI Moral Status 2023-03-08T02:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx_3rCcKsoF3taNTPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxls9CFh1UyCQzBMS14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxAbmzIs22-XnTCoWx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzgXJslm2RgoV03c7J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyHQ78qybSTCjeUVPF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxwkvYIZQqT4WzvbQF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzVrciAeQ7Ft1QBHmZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzdWF7RMwyCfgnhO8J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwrRy1AXCa2HMQ_Fct4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzFgJ4ZDtKl-2xxXXF4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
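The raw LLM response is a JSON array of per-comment codes, one object per comment id, with the same four dimensions shown in the coding result. A minimal sketch of how such a response could be parsed and looked up by comment id (the helper name `codes_by_id` is an assumption, not part of the tool; the sample record is taken from the response above):

```python
import json

# Sample taken from the raw LLM response above (one record, same field names).
raw_response = (
    '[{"id":"ytc_UgzFgJ4ZDtKl-2xxXXF4AaABAg",'
    '"responsibility":"user","reasoning":"mixed",'
    '"policy":"none","emotion":"resignation"}]'
)

def codes_by_id(raw: str) -> dict:
    """Parse the JSON array and index the coded records by comment id."""
    return {record["id"]: record for record in json.loads(raw)}

codes = codes_by_id(raw_response)
record = codes["ytc_UgzFgJ4ZDtKl-2xxXXF4AaABAg"]
print(record["responsibility"], record["emotion"])  # user resignation
```

Indexing by id makes it straightforward to match each coded record back to the comment it describes, as the interface does when it renders the coding-result table.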