Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Turing tests are subjective; the result is no more than an opinion. If you want to see whether something thinks like a human, it has to be able to do a number of things a Turing test will not tell you. The person asking questions in a Turing test could give the system a problem and ask it to come up with its own solution, with no learning data to start from. It should be able to come up with a hypothesis, determine a way to test it, and then implement it. If it fails, it should look for an alternate solution or an alternate way of solving the problem. Give it a set of instructions and see if it can choose to go against you on its own. Think of Adam and Eve, and God telling them not to eat the fruit from that tree, and yet they did. See if it has the capacity to come up with a solution that involves betrayal, then ask it why it was right or wrong in doing so. Final question: why is it okay that we step on ants and other insects with total disregard, but it isn't okay for the AI to step on us? The biggest problem here is that you will get different answers out of different people. Very low-IQ individuals may not be able to solve the problem themselves. Once you design a test that determines whether someone is or isn't human, who's to say it doesn't get used in a quality-of-life case to justify killing a person who just isn't that smart?
Source: YouTube · AI Moral Status · 2022-08-10T02:1…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
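
The Coding Result is a flat record: four labelled dimensions plus the timestamp at which the coding was stored. A minimal sketch of that shape, assuming nothing beyond what the table shows (the dataclass and the name CodingResult are illustrative, not the project's actual schema):

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    responsibility: str  # e.g. "unclear"
    reasoning: str       # e.g. "unclear"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "indifference"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T19:39:26.816318"
```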
Raw LLM Response
[ {"id":"ytc_UgyNzijvpMQcZKKwIJp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgyCd_IrIzkIKvdL2Xl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwEZDBG1YWpjxX0DM54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugwg0LQCOaFk2lqNykh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyosNfH1fvGIXxV-Jx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"} ]