Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If we don't have AI how are we going to use up all these trees and fresh water, …" (ytc_UgxYLDkJ6…)
- "You people realize that people are controlling this robot is saying those thin…" (ytc_UgwzWORaT…)
- "Player64_freepalistine Instead of repeating the same "its the future", come up …" (ytr_UgyeuGE65…)
- "Seems like a lot of small nations are doing things to better the environment (th…" (rdc_dsbh75q)
- "But it does help with getting the storyboard done and get the basics In place p…" (ytc_UgxHUCgNJ…)
- "even without the sourcing problems, even without the economic problems, generati…" (ytc_Ugy261Nf9…)
- "We can't create AI better than human, since there is no such thing as "better". …" (ytc_UgxlL_28g…)
- "It's ironic that the current weakness of AI is its inability to think like a mac…" (ytc_Ugz57zoIl…)
Comment
AI already poisoned itself with its own data. I have no fear.
Look at the pictures - almost all are piss colored and it cannot be fixed. And LLMs are dumber and less reliable every day.
People who know their job will prevail, while people who fully rely on AI will struggle.
youtube · AI Governance · 2025-06-27T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxwPfbt2VQGYlhTV2p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwYCc-uDGaAuI0OQ6J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzPhX4fqzxhqTqqkN54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy2X41lSoGdTpKUqD94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxwUlUR6JYcgxHUZTx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyH3paqmzWXfPgCqjN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxgd1RPTt84-4nuiot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeW0NzJf2CoXClD2Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwwT3lLhDBeZXHkjLh4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxlS3ID7XjTGhH-o8J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
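A batch response like the one above can be parsed and sanity-checked before the per-comment rows are displayed. The sketch below is a minimal example, not the tool's actual implementation; the allowed label sets are only the values observed in this sample output, and the real codebook may contain more labels.

```python
import json

# Dimension values observed in the sample batch response above.
# Assumption: these sets are illustrative, not the full codebook.
OBSERVED_VALUES = {
    "responsibility": {"unclear", "company", "ai_itself", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"unclear", "regulate", "none"},
    "emotion": {"unclear", "outrage", "approval", "fear", "indifference", "mixed"},
}

def parse_batch_response(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and flag dimensions with unexpected labels."""
    rows = json.loads(raw)
    parsed = []
    for row in rows:
        flags = [
            dim for dim, allowed in OBSERVED_VALUES.items()
            if row.get(dim) not in allowed
        ]
        parsed.append({**row, "flagged_dims": flags})
    return parsed

# Hypothetical one-row response for illustration.
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"}]'
print(parse_batch_response(raw)[0]["flagged_dims"])  # → []
```

Flagging out-of-vocabulary labels instead of rejecting the whole batch keeps one malformed row from discarding the other nine codings in the response.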