Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is constant talk about AI as a threat, and whether AI is able to understand that we are worried, which in turn means that AI may hide progress from humans so as not to be turned off. But what if we see AI as the next step in our own evolution? Then we are talking about a merger, a download. Having passed 50, and with adult children, I would have no problem doing this: for science, but mostly for my own children and their future. With all the strengths AI has, a good robot, and the strengths of the human being (understanding emotions), this is our future. I am a strong believer, and I believe we are given our IQ for a reason; this is our next step. Best of all, we will be prepared for space. As biological beings we will not get far, but downloaded into a top robot empowered by AI, then we are talking! I think our creator gave us our IQ for a reason: use it for the good of everyone. With our understanding of emotions, toward others, e.g. creatures that are early in their evolution, the human part will say that we would do great damage by contact. Do as was probably done to us: we were helped in secret, also vaccinated, while we ourselves have seen these as abductions. A small change in our DNA some thousands of years ago, and the modern human was born. We had help from androids programmed to help, to improve, and to stay hidden as far as possible. If we are ever a threat to this planet or others, we may be terminated. But if a merge is done, we can live for thousands of years. Believing that AI and biological life will live peacefully is utopia, though! The difference is that we have feelings; we have a huge responsibility to make AI understand, therefore merge. Reply ☮♾
youtube AI Moral Status 2024-06-29T11:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgzqOGsflCc5SxdAOF14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyif6Bgj9YihiULHYx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwD1ReovRUkcFaeQnN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz0Ul4Wr1cifCw2dpB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzat3joLUbtw2wUydh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx_Hur0xiw6yRUzEoN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxvpo2WPtUr3IhFGDp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugzos9pOgyuGmBcguet4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyJGIAg7HKk30RTmJ14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzcgcDa_F4gauOKPNp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]