Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Problem with people stating this AI isn't "sentient" is they're all assuming that the hierarchy of needs for AI to be "sentient" is the same as the hierarchy for humans. It's not. What these people are saying isn't that these AI's aren't or won't be sentient it's that they aren't or won't be "Human". Emotions aren't necessarily needed for self sustaining life. An AI could be sentient on learned logic alone. It could be a logic based consciousness . The definition of consciousness is just internal and external self awareness. Self awareness can be programed and logic based decisions can be self taught through an abundance of data. This is exactly what they are doing it's also exactly how humans learn and formulate behavioral patterns outside of genetic disposition.
youtube · AI Moral Status · 2022-07-09T22:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
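
For reference, a minimal sketch of how one coded comment could be represented in code. The field names mirror the four coding dimensions and the timestamp in the table above; the class name, types, and the example labels in the comments are assumptions for illustration, not part of the actual pipeline.

```python
from dataclasses import dataclass

# A minimal sketch of one coded comment; the class name and types are
# assumptions, not taken from the pipeline. Field names follow the
# dimensions in the result table above.
@dataclass
class CodedComment:
    id: str              # e.g. "ytc_UgxYRO0v_iqjHZ5Aozt4AaABAg"
    responsibility: str  # e.g. "unclear", "developer", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "indifference", "fear", "outrage", "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T19:39:26.816318"
```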
Raw LLM Response
[ {"id":"ytc_Ugz3YLIyxkASA-jGTz94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzvpT8yBvu3CAhF6D94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxvH7iSwPwbMeQl8XB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxYRO0v_iqjHZ5Aozt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyCd1vqmUeW77Do_GF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"} ]