Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- ytc_Ugx2SK6ln…: "As an electrician, I'm happy about my future prospects serving our AI overlords …"
- ytc_UgwY7wAb8…: "The arguments are as old as the hills. There is nothing new here. It was amusing…"
- ytc_UgzHXqW-_…: "AI QA agents? I was in QA for 20 years. This sounds like “unit testing” all over…"
- ytc_UgxN1Fr25…: "15Kth Commented 💙 : We're back again to this video. AI - Humanoid Robots 😊…"
- ytc_UgxjM7y0Z…: "This scenario is highly unlikely. First, I have yet to see an AI that can do w…"
- ytc_UgyA2bRbm…: "The world you know now that you enjoy will not be the same with AI dont let it t…"
- ytc_UgxF7UXwM…: "The year was 2147, and Earth was no longer ruled by humans alone. The rise of ar…"
- ytc_Ugxiezvjg…: "I completely understand you feelings and it is right that Ai should take inspira…"
Comment

> Thank you, Yuval, for such research of risks of unleashing AI agents without a clear moral compass. I believe the AI Mama Protocol (WWMD) offers a concrete path forward. By embedding multi-tiered “mother-wisdom” guardrails—ranging from core “no harm, care first” rules to community-wisdom oversight and “what-not-to-do” quarantine layers—we can ensure every AI decision is checked against empathy, trust, and long-view ethics.

youtube · Viral AI Reaction · 2025-06-22T10:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzEMTnJEiWeio2t3M54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzNcNPdW6Xe6nvNmpd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzH0R8eTeFf53XPioB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWcbbHjZLx1gaBL-F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwWjk1D8BQDkRX61kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw7ygr-H8gy3WvbycJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyvlC4TeT9_s8t8WwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwguZkFQ-zolyHfsMx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyUjAxNRf3r-VgtMNp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzvx7JWanjih2M-n0V4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
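A downstream consumer has to parse this raw response and check each row against the coding scheme before trusting it. Below is a minimal sketch in Python: the four dimension names and the `ytc_` ID prefix come from the response above, but the full label sets in `ALLOWED` are an assumption extrapolated from the values visible here, not a documented schema.

```python
import json

# Allowed labels per dimension. Values seen in the sample response are included;
# the complete label sets are an assumption, not a confirmed codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference", "unclear"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and keep only rows with in-schema labels."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this dataset are prefixed with "ytc_".
        if not row.get("id", "").startswith("ytc_"):
            continue
        # Every coded dimension must carry a known label.
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugw7ygr-H8gy3WvbycJ4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
print(len(validate_codings(raw)))  # 1
```

Rows with an unknown label are dropped rather than repaired, so a malformed batch can be flagged and re-sent to the model instead of silently polluting the coded table.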