Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sounds a bit like a more nuanced and updated version of what Asimov proposed over 80 years ago :) Props to both gentlemen. "A robot may not injure a human being or, through inaction, allow a human being to come to harm." "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law."
Source: YouTube · Topic: AI Governance · Posted: 2025-08-14T13:3… · ♥ 3
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgzRtarstrWwRQMu22B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_Ugwan1_9DvbcG1rRH_F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyROXPeek-vNeZ3gTR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzZqjoPmfec_UbUZyB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugx0MY7jTLC3DrmuIdl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyhET_KY1Z7alju2wd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx7m3TG2agDZKUskWN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgySqql1ODiK3_Nl19N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyBVV-hqlslcb-LIk14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx0Erj-kHymCzqunUN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]