Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All AI needs to implement Asimov's three rules. The first law is that a program shall not harm a human, or by inaction allow a human to come to harm. The second law is that a program shall obey any instruction given to it by a human (unless it violates the first law), and the third law is that a program shall avoid actions or situations that could cause it to come to harm itself.
youtube AI Governance 2023-04-20T02:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           regulate
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxDs_G6yMuMR3rbUBp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzSJE0OobKT6yTGKcB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwfiqdkDG_KRiluIth4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz8cUoXObSig4XdPu14AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyV9t7DmDGWWEHnhAl4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgykD5GjhgaE6QT38i14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_Ugwb5caVnaJyQP5nCj14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzHxEgJ913rqAYzpLh4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxUrs36AIgKToIgcZZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgxFtKeubJNiA859-Bl4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
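A raw response like the one above can be parsed and sanity-checked before the per-comment values are stored. The sketch below is a minimal example of that step; the allowed category sets are inferred only from the values visible in this response (the actual codebook may define more), and the function name is illustrative, not part of any real pipeline.

```python
import json

# Categories observed in raw responses so far. These sets are an
# assumption reconstructed from sample output, not the full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "distributed", "government"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Collect any dimension whose value falls outside the known categories.
        problems = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if "id" not in rec or problems:
            print(f"skipping {rec.get('id', '<no id>')}: bad fields {problems}")
            continue
        valid.append(rec)
    return valid
```

Records that fail validation are skipped rather than silently coerced, so a model that drifts from the expected label set surfaces immediately in the logs.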