Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't like the word sentient. It is not that the AI consciously feels; rather, over time, algorithms were developed with specific goals. These algorithms are intertwined with other algorithms that were built from training sets. As the number of intertwined algorithms grows, the system becomes too complicated for programmers, or for us, to rationalize. Decisions we would never expect are made not because the robot has a soul driving them; these decisions are deterministic, based on the past data the system has been fed. In other words, the AI machine would not be able to disagree with an output. The reason one may think it is sentient is that, from a third-person point of view, its outputs are unpredictable and resemble sentient human actions, even though all of its actions are deterministic and not driven by a conscious decision.
YouTube · AI Moral Status · 2022-06-30T05:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxB0ZMdikUbaOtJdD14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzebMT3mK6E6ghvxWx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy2bZaDQDdI-WMIFlt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxSSGqxQyeitcvE-_54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwR6bB6eLsXSsSe1HV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
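The raw response is a JSON array of per-comment coding records, and the table above is just the record whose id matches this comment. A minimal sketch of that lookup, assuming the field names shown in the JSON (the helper name `coding_for` and the truncated sample data are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coding records.
raw_response = '''[
  {"id": "ytc_UgxB0ZMdikUbaOtJdD14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzebMT3mK6E6ghvxWx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

def coding_for(response: str, comment_id: str):
    """Return the coding record whose "id" matches comment_id, or None."""
    for record in json.loads(response):
        if record.get("id") == comment_id:
            return record
    return None

record = coding_for(raw_response, "ytc_UgxB0ZMdikUbaOtJdD14AaABAg")
print(record["emotion"])  # resignation
```

Matching by the `id` field rather than by array position keeps the lookup correct even if the model returns the records in a different order than the comments were sent.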