Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Wouldn't it work to not programm an AI to refer to itself as an "I", so that it would have to answer stuff like "The calculations of chat gbt have concluded that the answer is... instead of "my Calculations have concluded that...". That way we would emidiatly know that an AI is sentient, as soon as it refers to itself as an "I", as an Entity.
youtube · AI Moral Status · 2023-08-20T20:1…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | developer                  |
| Reasoning      | mixed                      |
| Policy         | unclear                    |
| Emotion        | mixed                      |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[ {"id":"ytc_Ugx30p2Ev9TNMT3Z9oR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzveoMMR8hn4wydbpZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugxrjfs97J3IewCnt8B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyXXT7LYsmqCtRLThF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwf6_wf_AaUUWsxuEl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxIpFwLAZQeq5M-beV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyFuZrQkK_3y1NEiqd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwCmn4Xeripjdep-ER4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyXGwFawvBYfNJN3Bh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz0sbbL6NGXVPRfOXN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]