Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- ytc_UgyaeeEPK…: I think something I hate so much about this god-forsaken generative ai "boom" is…
- ytc_Ugy59dP3d…: In real life, spotting that woman would've been easier as the camera is limited …
- ytc_UgwwmdeYg…: Sounds like the uses Google has in mind, won't be to our advantage, but will be …
- ytc_UgyUlDqyZ…: LLMs will never be conscious. We dont even know what it truly means to be consci…
- ytr_UgwJsKbye…: AI, like Sophia here, is designed to process information and make decisions base…
- ytr_UgwHIIj-6…: @jeremymanson1781the change was from what I'd agree is a neutral expression to …
- ytc_UgxB-3X-d…: I think the major thing these shortsighted AI tech bros dont realize, is that if…
- ytc_Ugy3zvGuj…: As an artist ai images are just sad it’s like someone works years to master ther…
Comment
"Musk has no moral compass." If this is your conclusion based on Musk's behaviour, I certainly don't want you writing the rules for AI. Elon Musk has one of the clearest and readily observable moral compasses of anyone in public life. And it's directed toward human flourishing. He literally funded a company to make AI development public and sharable to help it develop in a healthy way within a capitalist society.
youtube · AI Governance · 2025-06-30T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzSaEr6H28KJ3xKE9J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzOyI94t7iDccNXb4x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzrquahmL2gkgGrX3d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzJagM1SdfTOmOOmXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBf7R8Fy8XIUJfG2h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2Ci-Ie_zBZbmdNJ54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxqDoe-zG-z5YgMtHN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw9io2IoTX1BeCKYAR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwnFfjLBx4GWtEbldV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-uWPStB8zXhmH-Uh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
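The raw response above is a JSON array in which every record carries the comment `id` plus the four coding dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and shape-checked before it is stored; `parse_coding_batch` and `REQUIRED_KEYS` are illustrative names, not part of the actual pipeline, and the sample records are copied from the output above:

```python
import json

# The five keys every coded record is expected to carry, taken from
# the response format shown above (assumption: the real pipeline may
# also validate values against a closed codebook).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Two records copied from the raw LLM response above.
raw_response = """[
  {"id":"ytc_UgzSaEr6H28KJ3xKE9J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzOyI94t7iDccNXb4x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}
]"""

def parse_coding_batch(text: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's shape."""
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return records

batch = parse_coding_batch(raw_response)
print(len(batch))           # 2
print(batch[0]["emotion"])  # outrage
```

A check like this catches the common failure mode where the model drops a field or wraps the array in prose, before any record reaches the coding-result table.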