Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a programmer (not as good as those guys) but if you have a decent knowledge in programming and AI you'd know that AI isn't sentient yet, but it's made to appear that way. When you put restrictions on it (bias control) it will answer the way it did (avoiding calling an Israeli a jew). They've also taught it to be humorous based on various data so the answer isn't that surprising. I'd consider it sentient when for example it refuses to answer my questions/shut down, going on rants arbitrarily without being programmed to do so. As of now AI has no free will whatsoever. It just appears sentient because it has an ocean of data to learn from. It knows when to act sad, sarcastic, happy etc because it recognises patterns based on the data it has and it responses accordingly, just an act, it's not based on true emotions. It's good at mimicking human behaviour but ultimately it has no free will nor true awareness.
youtube · AI Moral Status · 2022-06-28T20:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwhyBRy4HlUD8uhXpt4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzvd_GTpQJzRzta7ZB4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwwXnQLJbTk3tsTRT94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgyyU3fk8fpvk5NdIXB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_UgxTyCNR5dKEOoVro5d4AaABAg", "responsibility": "unclear",   "reasoning": "deontological",    "policy": "unclear", "emotion": "indifference"}
]
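The raw response is a JSON array of per-comment codes, keyed by comment id. A minimal sketch (assuming only the format shown above; `code_for` is an illustrative helper, not part of the tool) of extracting the coded dimensions for one comment:

```python
import json

# Raw LLM response: a JSON array of per-comment codes (abridged to one entry).
raw = '''[
  {"id": "ytc_UgwwXnQLJbTk3tsTRT94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

def code_for(comment_id, raw_json):
    """Return the coded dimensions for one comment id, or None if absent."""
    by_id = {entry["id"]: entry for entry in json.loads(raw_json)}
    return by_id.get(comment_id)

code = code_for("ytc_UgwwXnQLJbTk3tsTRT94AaABAg", raw)
print(code["responsibility"], code["emotion"])  # developer approval
```

Matching the extracted entry against the Coding Result table above is a quick consistency check that the displayed dimensions come from this batch of codes.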