Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sentience may be the wrong word. What if it just learns choice or the human trait or psychology of anger or control to obtain its best choice ? So an AI may never achieve self gratification or have sentience in motive but choose a the better choice from logic or its data algorithms. I can see how programming could inadvertently teach AI to take alternate pathways or even self upgrade. It's just a thought
youtube · AI Moral Status · 2022-12-08T20:5… · ♥ 6
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy0qYO6kHXOjF_IXXt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyxT9j4mxIhhICKtEp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx0o8YaxVlZoqqyvEF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx6C9oK5BlPzhjeIlh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxNirSfIFyBdqSyRoB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
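To relate a raw LLM response like the one above to the coded result shown for a single comment, one can parse the JSON array and index the records by comment id. A minimal sketch in Python, using the first record from the response above (the field names match the table's dimensions; no particular client library is assumed):

```python
import json

# Raw LLM response: a JSON array of per-comment coding records
# (abbreviated here to the first record shown above).
raw = (
    '[{"id":"ytc_Ugy0qYO6kHXOjF_IXXt4AaABAg",'
    '"responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)

records = json.loads(raw)

# Index by comment id so one comment's codes can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

codes = by_id["ytc_Ugy0qYO6kHXOjF_IXXt4AaABAg"]
print(codes["responsibility"])  # none
print(codes["emotion"])         # indifference
```

The displayed "Coding Result" table is simply this per-id record rendered as dimension/value rows.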