Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking a comment up by its ID or by picking one of the random samples below (a scripted lookup sketch follows the list).

Random samples:
- so the biggest question is, if there are no more jobs, people can not spend mone… (ytc_UgzPDb4aF…)
- He warned about AI for years, and to be fair at least he still does. But he gave… (ytr_Ugxf4oPmh…)
- Madagascar is one out of 54 (or more depending on whether you count disputed ter… (rdc_dpc857d)
- I'm confused. At our house at least, this episode was shown about a month ago. … (ytc_UgxSKjyCz…)
- Deepfake scam is next level now… voice, face everything look real. Do you think … (ytc_UgzIiZt2r…)
- I'm a musician, making music for movies and TV-documentaries. Of course, since … (ytc_Ugwvy1g32…)
- @Raziel312 You don't have to print money, you just take the excess money from ov… (ytr_UgwuIg8n9…)
- You said, “…the AI’s program is pulling from biased data sets.” You meant, “we h… (ytc_Ugy-C3Hn7…)
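For scripted access rather than the page widgets, an index keyed by comment ID is enough to reproduce the lookup. This is a minimal sketch, assuming the coded comments are exported as one JSON object per line; the file name `coded_comments.jsonl` and the record layout are hypothetical, not the tool's actual storage format.

```python
import json

# Minimal lookup sketch. The file name and record layout are assumptions,
# not the tool's actual storage format.
def load_index(path: str = "coded_comments.jsonl") -> dict:
    """Index coded records by comment ID (e.g. "ytc_...", "ytr_...", "rdc_...")."""
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                record = json.loads(line)
                index[record["id"]] = record
    return index

index = load_index()
print(index.get("ytc_UgwwGcXkpCQV7N6KagN4AaABAg"))
```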
Comment
I constantly hear that the safe way to control AGI when it comes about will be to make sure it’s aligned with human values and ideas.
You know, Stalin and Hitler were people. They had values and ideas…
Assuming humans know what is right and wrong is preposterous. We don’t, we never have. To align artificial intelligences with human values and ideas is insane. It’s asking, no begging for conflict.
The only entity with any agency over AGI or ASI, will be AGI or ASI itself. Humans are nothing more than fumbling monkeys in comparison.
youtube · AI Governance · 2024-04-18T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
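The four coded dimensions take values from a closed codebook. The sets below contain only the labels observed in this section's sample output; the full codebook may define more. A small validation sketch under that assumption:

```python
# Label sets observed in this sample; the full codebook may be larger.
OBSERVED_LABELS = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with one coding record; empty means it passed."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dimension, allowed in OBSERVED_LABELS.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not an observed label")
    return problems
```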
Raw LLM Response
The model codes comments in batches; below is the verbatim JSON array for the batch of ten that includes the comment above.
```json
[
{"id":"ytc_Ugzetiop39l6WX4GZI54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw566Qs3Q-V1eam3I54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw0rExiCYEJeD4HAsB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxH8NAWXDYNQmSc7rp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwwGcXkpCQV7N6KagN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyp8OlELh7er5nsBUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxnm588c-YZ1lyLBlt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFBYVEzLdwyvT8nsV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwFGKtVn8-FRq2Dpll4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxu7leEA9xcFl24W2B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
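Each entry in the array is keyed by comment ID, so the coding shown in the table above corresponds to the matching entry (ytc_UgwwGcXkpCQV7N6KagN4AaABAg). A parsing sketch follows; the error handling is an assumption about how malformed model output should be treated, not necessarily the pipeline's actual behavior.

```python
import json

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw model response (a JSON array of codings) into a
    mapping from comment ID to coding, skipping malformed entries."""
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    if not isinstance(entries, list):
        return {}
    return {
        entry["id"]: entry
        for entry in entries
        if isinstance(entry, dict) and "id" in entry
    }

# Usage: recover the coding shown in the table above.
# coding = parse_batch(raw_response)["ytc_UgwwGcXkpCQV7N6KagN4AaABAg"]
```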