Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For 1:41 - how about these for human preservation to start: Isaac Asimov's Three Laws of Robotics 1. No Harm: A robot cannot harm a human or, through inaction, allow a human to be harmed; 2. Obey Orders: A robot must obey human orders, unless they conflict with the First Law; 3. Self-Preservation: A robot must protect its own existence, provided it doesn't conflict with the First or Second Law.
youtube AI Governance 2026-01-18T10:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyucO0EmbukKCVopbJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzK-CDpgcMCv3uEIth4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw2YS_OmhrgLtps6t54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyIt_lS-CDUo7GvzOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzAd0qFFjd7jDBfw-V4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzWtZQTH4P1_EUConl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwGGKKJ-Eg1kTEP9CZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx6rZDrh_Fjqf1LBW94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgymjRZDVarjv46TCnV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzv9e8yFr4Xj_Lww614AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
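The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and tallied, assuming only the field names visible in the payload above (the two embedded records are copied verbatim from it; nothing else about the tool's pipeline is implied):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above,
# standing in for the full batch.
raw = '''
[
  {"id":"ytc_UgyucO0EmbukKCVopbJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzK-CDpgcMCv3uEIth4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
'''

codes = json.loads(raw)

# Tally each coding dimension across the batch.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(rec[dim] for rec in codes) for dim in dimensions}

for dim in dimensions:
    print(dim, dict(tallies[dim]))
```

A real pipeline would also want to validate each value against the coding scheme's allowed labels and flag records whose `id` does not match a known comment, but those label sets are not shown here.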