Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm surprised that there was no mention of Isaac Asimov who invented the three rules of Robotics which are fundamental ethical guidelines for robots in his fiction: (1) Don't harm humans or allow harm through inaction; (2) Obey human orders, unless they conflict with Law 1; (3) Protect yourself, unless it conflicts with Law 1 or 2; plus a later Zeroth Law: Don't harm Humanity. Is it possible that something similar can be imposed on AI AGIs?
YouTube · AI Governance · 2025-12-23T18:0…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | deontological
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzaNVAyY0y-DJD6n7V4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyTRYHlxOAX69X5xCx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy9vuvVoeJfNPW6dCN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwhiyj6dEolU9bUaix4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwg8JX0OF3QY5D3TQl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxiHUwxJR7OoTXW2NR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz17aHoopcgoxSE_Sh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzsVMLmwaMXcOJRqbJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzVRWFxrX53EdkF5KZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzoaI5hHkB34Cafyc14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
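A minimal sketch of how the raw batch response above can be parsed and a single coded record looked up by comment id. This assumes Python with only the stdlib `json` module; the field names and the example id are taken from the response above, but the loading approach itself is an illustration, not the pipeline's actual implementation.

```python
import json

# A truncated copy of the raw LLM response (one record shown for brevity).
raw = '''[
  {"id": "ytc_UgyTRYHlxOAX69X5xCx4AaABAg",
   "responsibility": "none",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "indifference"}
]'''

# Parse the batch and index records by comment id for O(1) lookup.
records = json.loads(raw)
by_id = {record["id"]: record for record in records}

# Fetch the coding result for the comment inspected above.
coded = by_id["ytc_UgyTRYHlxOAX69X5xCx4AaABAg"]
print(coded["reasoning"])  # → deontological
print(coded["emotion"])    # → indifference
```

Indexing by id is what lets the inspection view join one comment's "Coding Result" table back to its position in the raw batch response.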