Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.
- ytc_Ugy7El-7s… — "They prolly own all the rights to your work too, so they trained that AI to be J…"
- ytc_Ugzt_9x7n… — "😂using Brandon Sanderson a much better author, to show him that he is not an ar…"
- ytc_UgybsAVuz… — "PLEASE stop watching start flagging AI content. PLEASE create #humanmadecontent.…"
- ytc_UgwYprA4Q… — "Instead of wiping out humanity it can send people to other habitable planets and…"
- ytr_UgwXEv5zJ… — "@elisabethhowse yes you are so right^ this is also why google isn't allowing the…"
- rdc_nlwms88 — "Companies are offshoring / outsourcing a lot of their roles and doing it under t…"
- rdc_cdz7h20 — "Welcome to the capitalist machine, my friend. I felt exactly the same as you, an…"
- ytr_UgzrWjWhz… — "I did try out a supposed AI text detection tool which assured me, in broken gram…"
Comment
It is crazy how chill he is talking about A.I could possibly kill pepole in a couple of years and still working with developing the A.I systems, the world is "horrible" becase of pepole like that.
Intelligent but without any moral obligation for the kids of tommorow.
I think that pepole who support this insane project will get our life much more worse than what it was programed to do in the first place, support and help human life.
They are developt by pepole who are not about ethical judgements working with those machines.
If they know it will make the world Worse why they keep developing it? Their curusity will bring to our ending much sooner than it suppose to be.
youtube · AI Governance · 2025-09-05T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
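A coding record like the one in the table above can be rendered mechanically from the per-comment JSON the model returns. A minimal sketch, assuming the four coding dimensions shown plus a coded-at timestamp; `to_table` is a hypothetical helper name, not part of the tool:

```python
def to_table(coding: dict, coded_at: str) -> str:
    """Render one coding record as the Markdown Dimension/Value table
    shown in the Coding Result panel."""
    rows = [
        ("Responsibility", coding["responsibility"]),
        ("Reasoning", coding["reasoning"]),
        ("Policy", coding["policy"]),
        ("Emotion", coding["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {name} | {value} |" for name, value in rows]
    return "\n".join(lines)

# Example using the record shown above.
coding = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "ban", "emotion": "outrage"}
print(to_table(coding, "2026-04-26T23:09:12.988011"))
```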
Raw LLM Response
[
{"id":"ytc_UgwpfpXWTF0LwIpXQbd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzwWd0zIV_ogpaDrmF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyBswI4wtZUIevAJBF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6GdnsGvzRgfKQQft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWRCEgDCJ8kSpXv814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxzlkRQjm_Lz7yfDzZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwLv1o9knOehuoKSNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwYqpjlI3rXeee7bSh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyjkXfo7V80LwzeYZp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxZFa_h09AhWZZxk0h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
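The raw response above is a single JSON array with one object per comment. A minimal sketch of how such a response might be parsed and keyed by comment ID, assuming the five-field schema visible in the array (`parse_codings` and `REQUIRED_KEYS` are hypothetical names, not the tool's API); records missing a dimension are dropped rather than coerced:

```python
import json

# Two records copied from the raw response above, truncated for brevity.
raw_response = """
[
  {"id":"ytc_UgwYqpjlI3rXeee7bSh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzwWd0zIV_ogpaDrmF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text: str) -> dict:
    """Parse a raw LLM response into a dict keyed by comment ID,
    keeping only records that carry every coding dimension."""
    records = json.loads(text)
    return {r["id"]: r for r in records if REQUIRED_KEYS <= r.keys()}

codings = parse_codings(raw_response)
print(codings["ytc_UgwYqpjlI3rXeee7bSh4AaABAg"]["policy"])  # → ban
```

Keying by ID makes the lookup-by-comment-ID view above a single dict access, and filtering on `REQUIRED_KEYS` guards against partially formed objects in a malformed model response.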