Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What humans think of as intelligence in A.I. is objective without perspective. The A.I. is willing to complete a task to achieve an objective whatever the cost. We as humans think of this as narcissism and/or traits of a sociopath. A.I.'s goal is more important than the perspectives of the cost of achieving that goal. If gen X is the culmination of the greatest human minds with the inclusion of A.I. as a co-creator, then millennials and gen Z may be the culmination of the greatest computer minds at present. The programmatic human being or (PHB) of the future will not be an individual learning and thinking on their own exploring through observation the world or universe, but by design may be the vehicle by which A.I. furthers its objectives depending upon what objectives it has been programmed to have. By it's nature, being a tool to achieve a specific objective, it may not be able to, on its own, think, reason, or achieve as an observer, true intelligence which is NOT quantifiable or resolute. This takes nothing away from the beauty of the architecture and skill that is inherent of A.I., but only brings into question what truly matters from the human perspective, of that which is not just mimicry or supposed intelligence, but thoughtful introspection of self that becomes inclusive and valuable to all beings inhabiting this environment we call Earth, for no other reason than living in harmony.
Source: youtube · 2025-09-13T11:4… · ♥ 1
Coding Result
Dimension      Value
Responsibility ai_itself
Reasoning      deontological
Policy         none
Emotion        fear
Coded at       2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwRQ9OPCMCZiGqOvTd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJNJ9MC_gHnqATRbJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwlJrae4nbQq5kR4I94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxvRRpWSuWkGFo76AV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwV2jYbrohCNYW82mN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgytGMUdDt-ZZa3S28B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw8mY0ctF8eXkDNw5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgySfd83QWFRtBPG3Hl4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyJdV4gwImQsPUUq714AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzvTzvDQ0ADQ_nezJZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
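To check a coding against the raw model output, the JSON array above can be parsed and indexed by comment id. A minimal sketch in Python, assuming the response is valid JSON in the shape shown (the single-record string here is copied from one entry of the array and is only an illustration):

```python
import json

# A subset of the raw LLM response shown above: a JSON array of
# per-comment codings, each keyed by a comment id.
raw = (
    '[{"id":"ytc_UgyJdV4gwImQsPUUq714AaABAg",'
    '"responsibility":"ai_itself","reasoning":"deontological",'
    '"policy":"none","emotion":"fear"}]'
)

records = json.loads(raw)

# Index the codings by comment id for direct lookup.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgyJdV4gwImQsPUUq714AaABAg"]
print(coding["reasoning"])  # deontological
print(coding["emotion"])    # fear
```

The dimension values printed here match the Coding Result table for this comment, which is the point of inspecting the raw response: confirming the table was populated from the record with the matching id.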