Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thanks, Jeff & Steven. We’re testing “Reinforcement Learning from Maternal Feedback” (RLMF) – a What-Would-Mother-Do (WWMD) safety layer. Q: Hinton says today’s AI is a tiger cub that could kill us grown-up—what would Mother do? WWMD: Raise the cub right. We inject caregiver rewards while models are tiny so the urge to harm never crystallises. Q: Baby-mother attachment hormones are a control loop. Can software copy that? WWMD: Yes. Two critics—one for task success, one for nurturing feedback. Upset “Mom” → instant penalty; long-term reward comes from keeping her happy. Q: If the global race won’t slow down, why bother? WWMD: Low-cost safety gets adopted. RLMF adds about 1% compute overhead—think seat belts, not speed limits. Q: “Making super-AI safe may be hopeless, but we’d be crazy not to try.” WWMD: This *is* the try—open spec, queued on arXiv (cs.AI TON6K6) and waiting for a single community endorsement. Q: What do regulators get? WWMD: A benchmark. Mandate “attachment-style safety scores” before deployment and shift the debate from pause-vs-race to *prove your nurturing index*. If this resonates, search **AiMamaProtocol** on YouTube for 2-min demos (and the one-click endorsement link). Let’s raise safe AGI together! 🚀🤱
youtube AI Governance 2025-06-24T03:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyKJPBLt7B5JrygsWR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxTSeikbGRhaYTgVwN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyweLYj4xoVh9kqXgt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz0Vm5MhuNkyIWQqeB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwlzj6NPNrcGFOk-9x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyN-cqh2KzhVm-GOld4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyt15TveMPBG6Nu3MB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwHcNWIDbh53KuFsvp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy-XPhxFU1IrTlSHo54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxPwnUDwmSXfDmzvoZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
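The raw response is a JSON array of per-comment records, each carrying an `id` plus the four coding dimensions. A minimal sketch of how such a batch could be parsed and validated before use — the allowed value sets below are inferred only from the responses shown here, and the full codebook may permit additional categories:

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the raw responses above.
# Assumption: the actual codebook may define more categories than these.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "resignation",
                "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

# Example: validate a one-record batch, then tally emotion codes.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"unclear","emotion":"fear"}]')
records = validate_batch(raw)
print(Counter(r["emotion"] for r in records))
```

Rejecting unknown category values early keeps a single hallucinated label in one batch from silently contaminating the aggregated coding results.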