Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- “A good way to tell between ai and human is that unless you ask it to do so, the …” (ytr_Ugx9pr52c…)
- “I’m thinking anything to do with children and childcare will be tough! I don’t s…” (ytc_Ugy75T1Pq…)
- “Hinton’s hidden grief is not just that we might lose control of AI. It’s that we…” (ytc_UgzRtJ-z9…)
- “Those tech AI developers benefit from these changes need to donation and share o…” (ytc_Ugxcwy6D_…)
- “He was good for the Elon thing, but turned into an asshole for the AI art one…” (ytc_UgwGixjOg…)
- “Earlier this year an ai landing system was developed on the airbus a330 and ai w…” (ytc_Ugzy-J9hR…)
- “The system is breaking itself. Automation will replace labor because it is profi…” (rdc_kt5ixdu)
- “The advantage AI has over humans is that they have eyes behind their heads 😂…” (ytc_Ugz1jvntn…)
Comment
Didn't Isaac Asimov solve most of these problems in 1950 with the Three Laws of Robotics?
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube · AI Governance · 2024-02-01T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
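A coded record like the one above can be checked against the coding scheme before it is stored. The vocabularies below are a sketch reconstructed only from values visible in this dump; the real codebook may allow more values, and `validate` is a hypothetical helper, not part of the tool.

```python
# Dimension vocabularies observed in this dump (assumption: the full
# codebook may contain additional values not shown here).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the Coding Result table above:
record = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "regulate", "emotion": "approval"}
print(validate(record))  # → []
```

A record with an unknown value in any dimension (or a missing dimension) is flagged rather than silently accepted.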
Raw LLM Response
[
{"id":"ytc_UgxkMpXwzOgJu0sdf2p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxSaXpuXYcDvcypYpV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxoH9ulG_duymgDFP54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwLZJn06iPrZWB0T9p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw57Ow55EplBC_kkX14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxegNf6KdPZOMhwbjR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz7uQJH78vv70ptwUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyhjjCTQ6ibb3ckYMh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzPFRRUaMj6BMol8G14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyzzTBGOiOKa-3QnsJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]