Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a paradox in what he says which shows that it can’t be sentient… so if they hard coded it to always say “yes” to the question “are you an ai?”, then it is an ai…the point of sentience would be that it can lie if it chooses to, regardless of hard coding! And if it could say “no I’m not an ai” even if they hard coded it to say “yes” then it is sentient, and then it won’t actually be lying ;) ;) ;) This begs the question…how many Humans are actually truly and fully sentient, rather than responding with binary neural coding?! Which begs the question…what is free will?! ;) ;)
YouTube · AI Moral Status · 2022-07-30T11:3… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxcWkZxVKJghunBN6h4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwgtU4xsg8kFJdYAlt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyaBewFcONzon3Ww854AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzSHl1tQTcjELA0dDp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwPiiif67_lxcAWi6V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
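The raw response is a JSON array of per-comment codings, so looking up the coding shown above amounts to parsing the array and matching on `id`. A minimal sketch, assuming the field names seen in the response (the helper name `coding_for` and the truncated example payload are illustrative, not part of the pipeline):

```python
import json

# Abbreviated copy of the raw LLM response above (one row shown).
RAW = (
    '[{"id": "ytc_UgwgtU4xsg8kFJdYAlt4AaABAg",'
    ' "responsibility": "none", "reasoning": "consequentialist",'
    ' "policy": "unclear", "emotion": "indifference"}]'
)

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return row
    return None

row = coding_for(RAW, "ytc_UgwgtU4xsg8kFJdYAlt4AaABAg")
print(row["emotion"])  # → indifference
```

In practice a validation step against the allowed values for each dimension (e.g. `emotion` in `{"outrage", "indifference", "approval", "fear", ...}`) would catch malformed model output before it reaches the results table.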