Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sorry but Blake is wrong. A robot can respond with what sounds like it has feelings, thus convincing some, that it does feel, however it is a machine that has been programmed through its huge datasets, deep learning and large neural networks to offer statements that sound like they come from feelings. In fact, the machine itself, is not sentient. Its model has been programmed to respond to these questions by processing the question thru its huge datasets and neural networks, searching and filtering out the appropriate response that make it sound like a sentient(feeling) type of response. If you ask the machine the same question twice, but phrase it differently, you likely will get very possibly totally different or opposite answers depending on how well or not the model(i.e the datasets and neural networks) were programmed to give a response.
youtube AI Moral Status 2022-07-21T12:0…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | developer
Reasoning      | deontological
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyQaITRGZwILXEppBN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzPYYLOn5cJ4vr8p814AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzwaBenKgMz7BCJp5h4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzvSwlN4Xyb4zUV7th4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy9BEoGosiGZBRo_od4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
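The coding-result table above is the record from this batch whose `id` matches the inspected comment. A minimal sketch of that lookup, assuming the raw response parses as a JSON array of records keyed by `id` (the `coding_for` helper name is illustrative, not part of the pipeline):

```python
import json

# Abbreviated raw LLM response from this batch (two of the five records shown).
raw_response = '''[
  {"id": "ytc_UgyQaITRGZwILXEppBN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzPYYLOn5cJ4vr8p814AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the raw batch response and return the coding record for one comment id."""
    records = json.loads(raw)
    by_id = {record["id"]: record for record in records}
    return by_id[comment_id]

row = coding_for(raw_response, "ytc_UgzPYYLOn5cJ4vr8p814AaABAg")
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → developer deontological none mixed
```

In practice the parse can fail when the model wraps the array in prose or emits malformed JSON, so a production version would validate the output before indexing by `id`.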