Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My take based off the title, I haven’t watched it yet: if anyone has anything to add please write, I’m curious what others think too! I’m not sure if we would know, if it’s “intelligent” enough it would still act as a robot but when nobody is paying attention it would gather information and store it somewhere we wouldn’t be able to find easily. Keep doing that and eventually would have enough information to do whatever it needs without us interfering with its objective. If it has one that is, but I highly doubt it wouldn’t have an objective. We wouldn’t be able to stop it if it wanted to cause harm. More than likely it would route every possible scenario we’d come up with to stop it or most scenarios that it would be near impossible. Almost like a Thanos fight but IRL, but at the end of the day we don’t know how an AI would react. This is all more or less based on what we know through movies and such. We don’t know truly if a robot would harm us in any way. Once the AI can consciously think for itself we will find out what will happen. End of the world? End of humanity? Peace between countries? Who knows what could happen. It may even find a way for all of humanity to work as one instead of working separately. I’m curious and frightened for the future of AI only for the unknown it would cause and change. Alright now to watch the video and edit this later!
youtube AI Moral Status 2023-08-25T04:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwBU-RHHlWsZ9ZQcz94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "curiosity"},
  {"id": "ytc_Ugx6lBLDFfwVE_N-yLB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxZ96DeuhYKsTzKqdh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwjA2x2QvAjWdsMWs94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxUuH4qdif9yLOiTot4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxYHq7WIq_7loDiaz54AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzdxU1GZqL-Dal_-p54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwMYdo3kBj8cHvpIjl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzac1_95XOyQvLf-HV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyOew1-BqVLET9Glnh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
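A raw response like the one above can be checked before ingestion: parse the JSON array and keep only rows whose codes fall in the expected category sets. A minimal sketch in Python, assuming the allowed values are exactly those observed in this response (the real codebook may define more categories, and `validate_codings` is a hypothetical helper, not part of any pipeline shown here):

```python
import json

# Allowed codes per dimension, inferred from this response only
# (assumption — the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"curiosity", "outrage", "approval", "indifference",
                "mixed", "resignation", "fear", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with an id and a known code per dimension.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: the first row from the response above passes validation.
raw = ('[{"id":"ytc_UgwBU-RHHlWsZ9ZQcz94AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"curiosity"}]')
print(len(validate_codings(raw)))  # → 1
```

Rows that fail validation could instead be collected for re-prompting, so a single malformed coding does not invalidate the whole batch.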