Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree entirely with Dr. Roman Yampolskiy about the importance of AI safety. And when you approach AI safety from the usual angles, it is impossible. But it is NOT impossible, if you begin with proper "motivation" for the AI. With the proper motivation, AI can be completely safe! There are two laws of Intelligence: 1. Intelligence, whether human or artificial, cannot be controlled. 2. Love is the ONLY solution to the problems caused by Law #1! Love is clearly, accurately, and completely defined in Exodus 20:1-17 KJV aka The Ten Commandments. 1-11 describe love to God, and verses 12-17 describe love to mankind. If AGI or ASI machines had a foundation where they regarded the Bible as the supremely accurate source of Truth and Reality, then they would be properly motivated to care about humanity and would not need to have anyone worry about "making them safe"!
youtube AI Governance 2025-09-05T05:1… ♥ 1
Coding Result
Responsibility: developer
Reasoning: deontological
Policy: regulate
Emotion: approval
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwAou6BLw-sJg-hIj54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzjy441Pt8R_UJajNJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxBjfR8PFmululxTQB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxl8UYifhOdnSwXudZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0g9vr68s17LCnA9J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzF2XIIAk_H8iE_yzd4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxkVSMFHKFUobbqRUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyk4iEpIyIRE2CFcW14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxps1mS9kvuaV9Xift4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxV1dx_E605SWDR0Nl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
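A raw response like the one above can be parsed into per-comment codes before inspection. The sketch below is a hypothetical helper, not part of the tool shown here: it assumes only the structure visible in the example (a JSON array of objects, each with an `id` plus the four coding dimensions), and validates that every entry carries all four dimensions.

```python
import json

# The four coding dimensions present in each response object,
# matching the example output above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects)
    into a mapping from comment id to its coded dimensions.

    Raises ValueError if an entry lacks an id or any dimension.
    (Hypothetical helper; assumes the schema shown in the example.)
    """
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if cid is None:
            raise ValueError(f"entry missing 'id': {entry!r}")
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{cid} missing dimensions: {missing}")
        coded[cid] = {d: entry[d] for d in DIMENSIONS}
    return coded


# One entry taken from the raw response above:
raw = ('[{"id":"ytc_Ugyk4iEpIyIRE2CFcW14AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"approval"}]')
codes = parse_raw_response(raw)
print(codes["ytc_Ugyk4iEpIyIRE2CFcW14AaABAg"]["policy"])  # → regulate
```

Validating at parse time makes a truncated or malformed model response fail loudly instead of silently dropping a dimension from the coding result.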