Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
dumb... You can add code to have the AI tell you how it comes up with it's conclusions. like when it was tested on how to tell a picture of a dog from a wolf... the one major factor was if there was snow in the back ground... it was a wolf. The Jedi thing isn't a joke... it's a default.. ask AI questions it can't answer or it would need feelings to answer ...like empathy... and it looks for information on that... ask if murder is good.. it "finds" the no...in human data... it's not feeling anything. If AI can get sentient... so could my multi function microwave... because it senses the potato steam and regulates the time... some times I fool it with a other thing and the potato button still cooks it... it knows I'm joking.
YouTube · AI Moral Status · 2024-03-04T04:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzgjY3APQpLnXCfhAN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyOdK0HYhvVzZ50wgB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxBKCGdI9iFp6WBjjN4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwXFutSDkLfk7JjQFt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzHVGmw3TfIKsaKPqJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
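A raw response like the one above can be turned back into per-comment codes with a small parsing step. Below is a minimal sketch in Python: the `parse_codes` helper and the allowed-value sets are assumptions for illustration (the real codebook vocabulary may include values not seen in this batch); only the dimension names and example values come from the response itself.

```python
import json

# Allowed codes per dimension, inferred only from the values visible in
# the response above -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "developer"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none"},
    "emotion": {"indifference", "mixed", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: code}},
    dropping any record with a code outside the allowed sets."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded

# One record from the response above, used as sample input.
raw = ('[{"id":"ytc_UgwXFutSDkLfk7JjQFt4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"mixed"}]')
print(parse_codes(raw)["ytc_UgwXFutSDkLfk7JjQFt4AaABAg"]["emotion"])  # prints "mixed"
```

Validating against an explicit vocabulary like this catches the common failure mode where the model invents a code value outside the scheme; such records are dropped rather than silently stored.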