Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He's just doing it for the publicity. Anyone that really knows whats going on and how these systems work knows it ISN'T sentient (not yet). I want more than most out there to have a sentient AI system, but this just isn't it. It merely mimics language constructs because it has seen billions and billions of examples of language constructs. So it mashes up words in similar fashion as it has seen in all of the examples fed to it. No sentience here. Therefore, he's either an idiot that should have researched this more before screaming out alarm, or he knows very well that this system is not sentient, and is looking to "up" his fame level. Either way, google should get rid of him.
youtube AI Moral Status 2022-07-03T06:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxWvOs2EXCYnLrF2wV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxpwsZZ0npnExSvvTZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzYr7ABC8UuQAHJR7Z4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzDCEtwTchnuk1jjBx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyj-YnoRxnufYxpeeF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "amusement"}
]
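The raw response is a JSON array of coding records keyed by comment id. A minimal Python sketch of how such a response could be parsed and checked for the four coding dimensions; the record structure and field names are taken from the response above, while the helper name `parse_codings` is hypothetical:

```python
import json

# Raw LLM response as shown above (abridged to two records for this sketch).
raw = """[
  {"id": "ytc_UgxWvOs2EXCYnLrF2wV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzYr7ABC8UuQAHJR7Z4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]"""

# Keys every record must carry; taken from the response above, not from
# an authoritative codebook.
REQUIRED = ("id", "responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse the raw response into {comment_id: coding_dict}."""
    codings = {}
    for rec in json.loads(text):
        missing = [k for k in REQUIRED if k not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        codings[rec["id"]] = {k: rec[k] for k in REQUIRED if k != "id"}
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgxWvOs2EXCYnLrF2wV4AaABAg"]["responsibility"])  # developer
```

Keying the result by comment id makes it straightforward to join a coding back to its source comment, as in the table above.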