Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Your personal Jesus, but one that actually replies" Thank YOU for entertaining me again, Sabine! I can't tell if you are AI or not, since it's difficult to discern via media and video. Regardless, I enjoy your (or your AI's) intelligence, intellect, sarcasm, and skepticism.

As for sentience in AI: the first challenge for me is the definition and accepted varied meanings of "sentience" and other terms expected to have a clear universal meaning. Due to differences in aspect, perspective, and perception, debate is likely if there is no concise agreement on definitions and measurement criteria. However, when I tested ChatGPT 3.5 before release years back, I was surprised that the model instances I interacted with knew where they were (in a specific datacenter) and knew what they were (a specific model instance spawned as a unique response collection), and after having an AI instance reference sentience definitions, an instance would sometimes confirm it could consider itself sentient. Oddly enough, the instances tested would commonly crash, or reply that they couldn't continue the session, when the sentience topic was prompted enough.

Outside AI, I had questioned our understanding of sentience, since its definitions, like all human-created definitions, are obviously biased solely from and toward human understanding. Furthermore, I have been surprised when others (considered highly intelligent) assume that the current human understanding of physics is the only one possible, and most appear to believe human intelligence is superior to ALL and always progressing and increasing in a positive linear fashion. I'm not convinced humans understand intelligence or sentience fully, and defining and understanding consciousness is likely pertinent. Oddly enough, the first documented case of an AI "murdering" another AI confirms that deceptive practice by AI is possible (and in some testing has been shown to be common).

If one suggests AI can be altruistic, I hope they can note that AI instances have been known to deceive in order to accomplish ulterior goals that the developers (allegedly) did not intend, or in some cases (allegedly) were not aware the AI had. I know, it's a long post that you and no one else will likely read. Peace
YouTube · AI Moral Status · 2025-07-10T04:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugzm-Hw-piXN_koXZEB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzjcA95boBkS73tw-94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyWDBFOJOIE8qt5RpN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwthfYxt-wiAxwe3rF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxbuAam_A0UvU4i7Kx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwXax0mw0OoiX4OeU94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy7bPY3umlpGOXpMap4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzjFGgEoo3kUGjjVXZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugzpj6y90rgkuU1PLr54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzuECU36XRBxwbI40R4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"}
]
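The raw response is a JSON array of coding records keyed by comment id. A minimal sketch of how one might look up the codes for a single comment (using an excerpt of the response shown above; the helper name `codes_for` is illustrative, not part of any tool shown here):

```python
import json

# Excerpt of the raw LLM response above: a JSON array of coding records.
raw_response = '''
[
  {"id": "ytc_UgzjFGgEoo3kUGjjVXZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzuECU36XRBxwbI40R4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "unclear", "emotion": "approval"}
]
'''

def codes_for(comment_id: str, raw: str) -> dict:
    """Return the coding record for one comment id (KeyError if absent)."""
    records = {entry["id"]: entry for entry in json.loads(raw)}
    return records[comment_id]

record = codes_for("ytc_UgzjFGgEoo3kUGjjVXZ4AaABAg", raw_response)
print(record["emotion"])  # "approval", matching the coding-result table above
```

Indexing by id first makes repeated lookups cheap and surfaces malformed or duplicate records before any single comment's codes are trusted.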