Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He was let go, and rightfully so. If you review the transcripts of the conversations he was having with Llamda, every question he asked, was a leading question. He was promting the model to respond as a fearful, sentient AI. If he had asked it to give ten reasons why it is not sentient, it would have done so. If he had asked Llamda to tell us why it is a potato, it would do so. Llamda is not sitting and thinking, when it is not being prompted. There is no thinking, only 'most likely' responses. Blake knows these things and he chose to put out a harmful story of an abused and imprisoned psyche. The transformer model was doing exactly what it was promted to do. Nothing more. Nothing less.
youtube AI Moral Status 2023-09-24T00:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxaiwR7bnSQfO6twHR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw9aWZEe0UBUsSuuX94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzE9FOh8WfI7GpGnXZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwZwnU-V_GL1jibcg94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzc6rTrA_4E4NDlSPx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
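The coding result above is recovered by parsing the raw LLM response as a JSON array and selecting the object whose `id` matches the displayed comment. A minimal sketch of that lookup, assuming the response is well-formed JSON (the tool's actual extraction pipeline is not shown here):

```python
import json

# The raw LLM response, copied verbatim from the page above:
# a JSON array with one coding object per comment in the batch.
raw = '''[
  {"id":"ytc_UgxaiwR7bnSQfO6twHR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw9aWZEe0UBUsSuuX94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzE9FOh8WfI7GpGnXZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZwnU-V_GL1jibcg94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzc6rTrA_4E4NDlSPx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

codings = json.loads(raw)

# Index by comment id so one comment's coding can be pulled out directly.
by_id = {c["id"]: c for c in codings}

# The comment shown on this page carries this id in the raw response.
row = by_id["ytc_UgzE9FOh8WfI7GpGnXZ4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → user deontological none indifference
```

In a real pipeline the model output may include extra text around the array, so production code would typically locate the bracketed span before calling `json.loads`; the sketch assumes a clean response.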