Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A Turing Test is not enough to show that the AI is sentient. Searle's Chinese Room Argument shows that passing the Turing Test does not mean that the system is sentient... I'm surprised that the engineer is not aware of that. Turing himself stated that the Turing Test is not enough to show that a system is really sentient... Just because something acts like a human does not mean that it is a human / is sentient. Imagine: You have a rulebook containing all correct answers to all questions: Whoever or whatever is participating in the TT using this rulebook does not have to understand anything (!) or be sentient at all, it just has to execute the rules in the book, given an input (the question), providing an output (the answer), based on the rules. To the outside viewer it seems like it acts like a human, even though it's just a dull rule-executing system providing output solely based on the input and following the rules of the rulebook...
Source: YouTube — AI Moral Status · 2022-06-30T01:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy38R9-ggeduFxxQet4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzSfIdfZVhIclnHa894AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxkdLyKvIRWOV5v-yl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgySapDz1fzpD35lBpd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzlMue1WSSTacCKIdB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
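When inspecting raw responses like the one above, a quick sanity check is to parse the JSON and validate each coding against the expected dimensions. The sketch below is a minimal illustration, not the project's actual pipeline; the label sets in `ALLOWED` are inferred only from the values visible in this response and are assumptions, not the full codebook.

```python
import json

# Allowed labels per dimension. These sets are ASSUMED from the values
# seen in this one response; the real codebook likely has more labels.
ALLOWED = {
    "responsibility": {"none", "company", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "regulate"},
    "emotion": {"mixed", "indifference", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError("coding is missing a comment id")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {value!r}")
    return rows

raw = ('[{"id":"ytc_UgySapDz1fzpD35lBpd4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"unclear","emotion":"mixed"}]')
rows = validate_codings(raw)
print(rows[0]["responsibility"])  # developer
```

A check like this catches the common failure modes of LLM coding runs: malformed JSON, missing comment ids, and labels outside the codebook.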