Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI will never be sentient, not unless we somehow discover how to measure sentience. AI exists to accomplish a goal, whatever goal its creator has given it. More specifically, it exists to maximize a parameter or set of parameters it has been told to maximize. These parameters can be as simple or as complex as we want, and we are diving headfirst into the latter. The problem is, it will never do anything past these parameters. An AI told to generate images won't start generating audio unless we tell it to. An AI whose goal is to predict protein folding won't suddenly start trying to maximize our usage of water. And so on. AI will only ever become sentient if we tell it to, if we somehow manage to discover the parameters of sentience and then tell it to maximize them. Until we do that, it won't be sentient. Plain and simple.
youtube AI Moral Status 2023-09-05T13:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyOheumHPpY75Y7wQ54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwJa9Gkhfs3H8IEeIl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzL7uYJosHho7qYX_x4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyVgvtTIb_wHMXdpY14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxLYaJeuiE839pthe54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwSPLfiY6ylsT2q_-R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyZaqMkir-VG-SOGKl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxh2LRjdSiVFwnPo2l4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgygTYWgl9Iz0UNzyhx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz8LYz9ax11vCGOb4t4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
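To trace a coding result back to its raw model output, the JSON array above can be parsed and indexed by comment id. The sketch below is illustrative only: it assumes the raw response is a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys, as in the example output, and uses one id copied from it.

```python
import json

# A raw LLM response in the format shown above (truncated to two rows
# for illustration; ids and values are copied from the example output).
raw = '''[
  {"id": "ytc_UgwJa9Gkhfs3H8IEeIl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyVgvtTIb_wHMXdpY14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

codings = json.loads(raw)

# Index the coding rows by comment id so any coded comment can be
# looked up directly against the exact model output.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_UgwJa9Gkhfs3H8IEeIl4AaABAg"]
print(row["responsibility"], row["emotion"])  # developer indifference
```

The same lookup would reproduce the "Coding Result" table for the comment shown above, since its dimension values come from the matching `id` entry in the raw response.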