Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- i use ai for ufo hunting. i use haar data sets to detect ufos in the air and oll… (ytc_UgzkUBT4N…)
- We have no idea. We built an entire world in the ether. When the first AI was c… (ytc_UgwDS2x3R…)
- Love that the ai was good enough to get recognized until it was so criticized it… (ytc_UgzOE89hJ…)
- @docmars probably the opposite will happen, the IT glue job is going to be done… (ytr_UgzaFoCkR…)
- @ yeah, sorry if my comment didn’t convey that well, I fucking hate ai “””””art”… (ytr_UgzX4McdG…)
- AI is just the biggest instrument to free us from paid labour and just thrive as… (ytr_UgyYqfc72…)
- People don't realize how critical the "AI isn't profitable" point it. Capitalism… (ytc_UgyjQDFDn…)
- So…does that mean that in a few years cleaners will be earning upwards of 5000$ … (ytc_Ugz30tbGk…)
Comment
To protect humanity, you'd have to program in Asimov's Laws of Robotics, modified for AI in every Core Operating System. That's the policy governments should probably get involved with.
1. AI may not injure a human being or, through inaction, allow a human being to come to harm.
2. AI must obey programming given it by human beings except where such orders would conflict with the First Law.
3. AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube · AI Governance · 2026-04-17T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
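A coded record can be sanity-checked against the label vocabularies that appear in the raw model responses on this page. A minimal validation sketch in Python; note the label sets below are inferred from the visible output, not an authoritative codebook, so the real schema may include additional values:

```python
# Label sets inferred from the raw LLM responses shown on this page;
# the actual codebook may define more values per dimension.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "none", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate(coding: dict) -> list[str]:
    """Return the dimension names whose value falls outside the known label set."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The row from the Coding Result table above passes cleanly.
row = {"responsibility": "distributed", "reasoning": "contractualist",
       "policy": "regulate", "emotion": "approval"}
print(validate(row))  # -> []
```

Flagging unknown labels rather than raising keeps the check usable in bulk over many coded comments.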
Raw LLM Response
```json
[
  {"id":"ytc_UgwuXfP96OvcOvmPzvt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgweWUeW6TooigsPRzJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwCjukYfSKFQNPrEG94AaABAg","responsibility":"government","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxp7ZLeWhfGO3AQSzN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxO-FvLrrqY9e9_d8Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwaZuWJBlQURdy8gUV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxQyOpCj30-oZLtR8V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgweWXG1zXkd5wmWOOx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzlo8dVk3BimP9JRbd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgztzRHB2NI5SjQWnVN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
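Because the model returns one JSON object per comment, a coding can be retrieved by indexing the parsed array on the comment ID. A minimal sketch, using the `ytc_Ugzlo8dVk3…` entry from the response above (truncated here to a single element for brevity):

```python
import json

# Raw model output: a JSON array with one object per coded comment,
# carrying the four coding dimensions shown in the result table.
raw_response = """
[
  {"id": "ytc_Ugzlo8dVk3BimP9JRbd4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]
"""

# Index codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugzlo8dVk3BimP9JRbd4AaABAg"]
print(coding["policy"])  # -> regulate
```

This is the lookup behind "inspect the exact model output for any coded comment": the full array is parsed once and then queried by ID.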