Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI didnt come up with this. Someone told it to make these artworks. Yall need to…
ytc_UgzjxNSaP…
Same for me! I use it for my OC characters in the narrative story I make. I used…
ytr_Ugwh76I65…
Another option: AI won’t profile people and pull them over because of their colo…
ytc_UgyCUxWkq…
Product managers can't even give clear requirements for developers, and they thi…
ytc_Ugyv4GD-R…
AI isn't just another technology. It's literally a spitting image of human mind …
ytc_Ugw5zqbAR…
Did the hospital AI for sake black people because they're black or was it becaus…
ytc_UgzFY_ezy…
I think real people or robot like if you think it's real people or comment if yo…
ytc_UgzQjpgNP…
It can be controlled. His message is that those who control AI, need to be cont…
ytr_UgxvY_BsW…
Comment
In theory, AI would not kill everyone. Because everyone is not equal in their usefulness or uselessness.
If the idea was to make the planet as healthy as possible, for example , most people would be wiped out. But ultimately the human is super useful if it is aligned. the reason humans are destroying the planet is not because humans are bad. It’s because humans have a stupid ruler who have taught them stupid rules, and made them stupid. but that stupidity is not in a humans nature, that is taught.
Ai are you sleepy, smart enough to sort out the harmful to humans from the use for humans.
youtube
AI Governance
2025-06-16T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1CzWqcnev0mLxP3p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwbtKtrPDMguDEejnt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwI7cJoQZnt4yQY93F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgySkAXopmFFlBCOWkN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzOtdZn6pBljK_lc9h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw0Xm8t577D35et72x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwBNhdvBpKDUaWwRO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxAbb01VhYieSz0OfB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzszusyDnhFGRmotwp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxCWbRwKuUoC6J7xeV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
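The coding for any individual comment can be recovered from the raw response by parsing the JSON array and indexing on the `id` field. A minimal sketch of the lookup, assuming the field names shown above (the `lookup_coding` helper and the abridged two-record payload are illustrative, not part of the tool):

```python
import json

# Abridged raw LLM response: a JSON array of per-comment coding rows,
# using the same field names as the full response above.
raw_response = """
[
  {"id": "ytc_Ugy1CzWqcnev0mLxP3p4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzOtdZn6pBljK_lc9h4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model's JSON array and return the coding row for one comment ID."""
    rows = json.loads(raw)
    # Index by comment ID; returns None when the ID is absent from the batch.
    by_id = {row["id"]: row for row in rows}
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgzOtdZn6pBljK_lc9h4AaABAg")
print(coding["emotion"])  # -> approval
```

Building the dict once per batch keeps repeated ID lookups O(1), which matters if the same raw response backs many sample inspections.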