Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Finally, here we are. Asimov I believe, already wrote the ten Robot Laws. #1 A Robot may not harm any human. #2 A Robot must do whatever is required to prevent a human from being harmed. #3 In a conflict between rules 1 & 2, a Robot must etc...+7 more.. (paraphrasing here! :)
youtube · AI Moral Status · 2026-04-13T00:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyDsy46bCZgRZJsQhJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwehwtCuSI7957d8vR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxCgoRcTAutiqEZgSZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzWbbrAPCIKnv-Utsx4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwoVenosZ7IMe5QKMZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwHUrOtdoHJncWe5R14AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_UgxriCjuxmgOrmF8pxp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxtMKeqXMHsf_kUomF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz4yNu2gpDEs74h1oR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzJRMna1Avs8clYCIJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
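The raw response above is a JSON array, one object per comment, keyed by comment id with one field per coding dimension. A minimal sketch of how the per-comment coding result could be looked up from such a response (the helper name `coding_for` is an illustration, not part of the tool; the raw string is truncated to two entries for brevity):

```python
import json

# Raw LLM response in the format shown above (truncated to two entries).
raw = '''[
  {"id": "ytc_UgyDsy46bCZgRZJsQhJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwehwtCuSI7957d8vR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]'''

def coding_for(comment_id, raw_response):
    """Return the coding dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            # Drop the id so only dimension -> value pairs remain.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

result = coding_for("ytc_UgwehwtCuSI7957d8vR4AaABAg", raw)
print(result)
# {'responsibility': 'ai_itself', 'reasoning': 'deontological',
#  'policy': 'regulate', 'emotion': 'approval'}
```

The lookup for the second id reproduces exactly the Coding Result table shown above (responsibility ai_itself, reasoning deontological, policy regulate, emotion approval).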