Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or click one of the random samples below.
- "What is your argument for an AGI system developing some degree of Psychopathy...…" (ytc_Ugz_UWo_i…)
- "Don't get any chip put inside you. I truly believe its the mark of the beast and…" (ytr_UgybFXO_o…)
- "I love doing studies of other artists. I'm not a great artist by any stretch, bu…" (ytc_UgwhcRGwx…)
- "Every time there is one a post that mentions biodiversity, I always make a comme…" (rdc_degik3h)
- "Not all the jobs will be replaced by IA. Hairdressers for example won’t be repla…" (ytc_UgxmLEICX…)
- "Reminder that many, many doctors use Chat GPT for diagnosis and the sad thing is…" (ytc_UgwjnnvBF…)
- "This technology has already caused the death of at least one man. The AI talked…" (ytc_Ugzeq1Qns…)
- "Tech companies are also using AI as an excuse to lay off local talent in favor o…" (ytc_UgwJvjYSd…)
Comment
Thanks, Jeff & Steven. We’re testing “Reinforcement Learning from Maternal Feedback” (RLMF) – a What-Would-Mother-Do (WWMD) safety layer.
Q: Hinton says today’s AI is a tiger cub that could kill us once it’s grown. What would Mother do?
WWMD: Raise the cub right. We inject caregiver rewards while models are tiny so the urge to harm never crystallises.
Q: Baby-mother attachment hormones are a control loop. Can software copy that?
WWMD: Yes. Two critics—one for task success, one for nurturing feedback. Upset “Mom” → instant penalty; long-term reward comes from keeping her happy.
Q: If the global race won’t slow down, why bother?
WWMD: Low-cost safety gets adopted. RLMF adds about 1% compute overhead—think seat belts, not speed limits.
Q: “Making super-AI safe may be hopeless, but we’d be crazy not to try.”
WWMD: This *is* the try—open spec, queued on arXiv (cs.AI TON6K6) and waiting for a single community endorsement.
Q: What do regulators get?
WWMD: A benchmark. Mandate “attachment-style safety scores” before deployment and shift the debate from pause-vs-race to *prove your nurturing index*.
If this resonates, search **AiMamaProtocol** on YouTube for 2-min demos (and the one-click endorsement link). Let’s raise safe AGI together! 🚀🤱
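The two-critic scheme the comment describes (one critic for task success, one for nurturing feedback, with an instant penalty when "Mom" is upset) could be sketched roughly as follows. This is a hypothetical illustration of the idea only; the function name, the blending weight, and the threshold are all assumptions, since the comment gives no concrete spec.

```python
def rlmf_reward(task_score: float, nurture_score: float,
                nurture_weight: float = 0.5,
                upset_threshold: float = 0.0,
                upset_penalty: float = 1.0) -> float:
    """Combine a task critic and a nurturing critic into one scalar reward.

    If the nurturing critic falls below the threshold ("Mom" is upset),
    return an immediate penalty; otherwise blend task success with
    the caregiver signal as the long-term reward.
    """
    if nurture_score < upset_threshold:
        return -upset_penalty  # instant penalty on upsetting the caregiver critic
    return (1 - nurture_weight) * task_score + nurture_weight * nurture_score
```

Under these assumed defaults, a run that succeeds at its task but upsets the caregiver critic still earns a negative reward, which is the control loop the comment is gesturing at.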
youtube · AI Governance · 2025-06-24T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyKJPBLt7B5JrygsWR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxTSeikbGRhaYTgVwN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyweLYj4xoVh9kqXgt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz0Vm5MhuNkyIWQqeB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwlzj6NPNrcGFOk-9x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyN-cqh2KzhVm-GOld4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyt15TveMPBG6Nu3MB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwHcNWIDbh53KuFsvp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy-XPhxFU1IrTlSHo54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxPwnUDwmSXfDmzvoZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
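A batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below infers the allowed label sets from the values visible in this log; they are assumptions, not a published codebook, so the sets may be incomplete.

```python
import json

# Label sets inferred from the codings shown in this log (assumed, may be incomplete).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "resignation", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject any row with an unknown label."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
    return rows
```

Running this over the response above would accept all ten rows; a row with a label outside the inferred sets (or a missing dimension) raises instead of being silently coded.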