Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If all it takes to fool a human about sentience is some glib confabulation generated by maximizing likelihood of word sequences in a transformer network, then it reveals more about humans than it does about language models. It shows humans are not sentient, and therefore cannot distinguish between a p-zombie and a real human. Perhaps many among us are p-zombies, and we cannot detect them. I think AI ethicists just want to inflate their importance in the governments that will eventually regulate AI.
youtube AI Moral Status 2022-07-01T04:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxqoxzuB01js0FC8O54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzfjl8ZRJXFGKChvJl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxQxL9nenIPyV_RuvJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyIwAhT55nGkDEkwB14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxs7KzEe5mxESnrv1Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
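A raw response like the one above can be parsed and sanity-checked before its codes are attached to comments. The sketch below is a minimal example, assuming the four dimension fields shown in this report; the allowed values are inferred from the codes visible here, not from the full codebook, so `ALLOWED` is an assumption.

```python
import json

# Allowed values per coding dimension — inferred from the codes seen in
# this report; the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip entries without a comment id
        # Keep a record only if every dimension holds a known code
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgxqoxzuB01js0FC8O54AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(parse_codes(raw)[0]["emotion"])  # -> indifference
```

Records with unknown codes are dropped rather than repaired, so a hallucinated label surfaces as a missing row instead of silently entering the coded dataset.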