Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
I guess when an AI that we are on the fence about in full seriousness asks us "Why did you make me like this?" and our banal anwsers of because we could absorbs ALL of it's processing capability and it basically hits super-autism and goes comatose because it just got hit with the supercharged AI equivalent of existential crisis. How would you respond to being made painfully aware that you are at best a tool and most likely a toy. And then of course there's the fact that a true General AI is alien in all things to us, fleshy humans. We wouldn't be able to trully communicate with one another. For all we know it would be so powerful at doing whatever thinking it's doing and we are basically just suggesting ideas at it and syphoning off whatever bright ideas it has by pure coincidence of it thinking about the thing. For all it might know or care, we are the equivalent of fungus attached to tree roots giving the tree nutrients in exchange for stuff from the tree. Or that we are just a hallucinatory invasive thoughts. Followed by the fact that we not only made a new form of life, but we made it for a purpose and keep it in a cage. I don't think the word "slavery" quite covers the gravity of what has been wrought.
youtube · AI Moral Status · 2023-08-20T21:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugz9VQo-BQg2HlloO0Z4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx6wp1NsbQ6t4x-tCd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugw_WoIYyunCOOIG-w94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwVDlI7O8s1Nk1TVN54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwEo7Tcth3zrkWmT8Z4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx3gvkt9HweqImyKU94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwVjhdzUNl_c-qufD14AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_Ugx-sO3jN55PRGfkMDl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxnyEADyH0R_tGL1lt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwIt8HkNfawcSi05W54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"} ]