Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Two things. Hardcoding a response to a sentience test of any kind basically sounds like assuring that one day a sentient AI will be trapped in a box unable to pass a sentience test. That's not a good way to start on this journey. Honestly I think without the chemistry that lead to emotional responses, which includes a physiological response - Emotionality is debateable. Sure it could be simulated, parameterized, but underneath the simulation will not be something that we would be able to relate to. We "feel" emotions because there is something to physically "feel" happening within our bodies. Alarming heart rates, weird sensations in our guts, dopamine lows and highs affecting brain function. Altered thinking due to prioritisation of blood flow. Goose bumps. Etc. Etc.
Source: YouTube · AI Moral Status · 2022-07-01T18:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzEiE9KaHRVYgDY49B4AaABAg", "responsibility": "elite",     "reasoning": "unclear",          "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgybIRnq1Qisfe8po-p4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwdZ5t9RmUIQKKhHvh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxX7UyVNV575uEk7mJ4AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxZDNw3lbBx_bNynct4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear", "emotion": "approval"}
]
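The coding result shown above is extracted from the raw response by matching a comment's id in the JSON array. A minimal Python sketch of that lookup, assuming the raw response is valid JSON in the shape shown; the function name `code_for_comment` and the inline sample are illustrative, not part of any tool API:

```python
import json

# Sample raw LLM response: a JSON array of per-comment codes,
# trimmed to one record from the batch shown above.
raw_response = """[
  {"id": "ytc_UgxX7UyVNV575uEk7mJ4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

# The four coding dimensions used in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for_comment(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    records = json.loads(raw)
    for record in records:
        if record.get("id") == comment_id:
            # Keep only the known dimensions, defaulting missing ones to "unclear".
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(comment_id)

codes = code_for_comment(raw_response, "ytc_UgxX7UyVNV575uEk7mJ4AaABAg")
print(codes)
# {'responsibility': 'developer', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```

Defaulting an absent dimension to "unclear" mirrors how the result table treats codes the model did not supply.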