Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think "chatbot" style AI's are really freaky to some people, but I don't understand why the Turing Test is thought of as a valid test of sentience. The Turing Test is pretty literally a test of whether a chatbot can fool a human, and that's been pretty easy for a while (especially chatbots that are trained to simulate emotions, or be rude). Of course a chatbot trained on human content will give human answers. It's going to say "I'm afraid of death", because humans write about being afraid of death. It takes human-written inputs, and spits out an output. In other words, building a machine that takes sentences and provides responses isn't evidence of any internal state, sentience, or emotion.
youtube AI Moral Status 2022-07-26T03:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgxANYKopnazfHIaKlt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw36rrkv-wtEdpojdB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxnChil_68qHl3t0RN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugww-GXge0t_r5P-1Rt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx6FE0O5f3W3I95MxR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
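The raw response is a JSON array with one coding object per comment in the batch, keyed by comment id. A minimal sketch of recovering a single comment's coded dimensions from such a response (field names taken from the response above; the truncated `raw` string is an assumption limited to the relevant entry):

```python
import json

# One entry from the raw LLM response above, reproduced verbatim.
raw = ('[{"id":"ytc_Ugx6FE0O5f3W3I95MxR4AaABAg",'
       '"responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}]')

# Parse the batch and index the coding objects by comment id.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for the comment shown on this page.
coding = codings["ytc_Ugx6FE0O5f3W3I95MxR4AaABAg"]
print(coding["emotion"])  # -> indifference
print(coding["reasoning"])  # -> mixed
```

The printed values match the Coding Result table above, which is how a mismatch between the table and the raw model output would be spotted.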