## Raw LLM Responses

Inspect the exact model output for any coded comment. Look a comment up by its ID, or pick one of the random samples below.

### Random samples
- `ytc_UgyEJ0AOp…` — "Ok AI art can be good, but the thing is it's theft, the AI uses different art fr…"
- `ytc_UgzCaHoUA…` — "So now we're in favour of this? AI facial recognition is a disgusting invasion o…"
- `ytc_Ugz-kFICx…` — "Did they ever actually discuss the future massive unemployment which will be cau…"
- `ytc_UgxnFsfd_…` — "ai when it is in research phase is really exciting too watch and follow it progr…"
- `ytc_UgwdeWe8l…` — "Claude tried to kill someone in a simulation. Still scary, but you give the imp…"
- `ytc_Ugzcenr-y…` — "the ironic part about ai \"\"\"artists\"\"\" is that they go out of their way to mock,…"
- `ytc_Ugzn3FJry…` — "A few years ago, We laughed pictures and movie made by AI. However We cannot rec…"
- `ytc_UgzloJGs8…` — "First off no ai isn't taking over companies are losing more money than they're g…"
### Comment

> Is it possible to code AI with a universal set of morals, like those that are foundational to all world religious faiths? They all share some list of virtues (patience, temperance, justice, empathy, etc) to adopt and a list of vices (envy, greed, apathy, hate etc) to overcome. If a LLM AI could help humans consider the consequences of a proposed action with a goal of benefiting the most people (including themselves) with the farthest reaching ripple effect, then that would improve critical thinking and decision making.

Source: youtube · AI Governance · 2025-12-26T01:1…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response

```json
[
  {"id":"ytc_UgxhPPKZY2KTp88jVPN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy8sfFtHfTKqK-gO4h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzs1TP6JhqHwmMW4gN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzNobJa9Q2YPODre954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxKSwSo7KesN6UouIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzYzV7CHoJr0Vq34YR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy10c9rFGpEIWyZn994AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJZoZKzRL647WxmZB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQkMsvgOPuFd9HUR14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyxwntR-cQyxP8esnR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
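The raw response is a JSON array with one object per coded comment. A minimal sketch of how such output might be parsed and checked before it lands in the database — the allowed label sets below are inferred from the samples on this page, and the real codebook may define additional categories:

```python
import json

# Allowed labels per dimension, inferred from the sample output above
# (hypothetical: the actual codebook may include more values).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval"},
}

def validate_codes(raw: str) -> dict:
    """Parse a raw LLM response and return {comment_id: codes},
    raising ValueError on any label outside the codebook."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim} value {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a shortened, made-up comment ID:
raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
print(validate_codes(raw)["ytc_x"]["policy"])  # regulate
```

Rejecting out-of-vocabulary labels at parse time is what makes a "Coded at" record like the one above trustworthy: a model that drifts into free-text answers fails loudly instead of silently polluting the coded dataset.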