Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "You known poisoning the AI makes it harder to poison, right? It's called adversa…" (`ytc_Ugyym-wbL…`)
- "Could the engineer 👷 explain why it’s advantageous to have size double Ds on his…" (`ytc_Ugwx93OAw…`)
- "Idk if this happened to someone else, but Sora AI said that make animations and …" (`ytc_UgzfOPMjS…`)
- "People who are shouting hard against all forms of AI really have to understand t…" (`rdc_nt6nfjl`)
- "Interesting take — but at its core, aren’t we just doing the same? Responding ba…" (`ytr_UgwQWZ7Ty…`)
- "Robots can only destroy human if they can clone a robot army but hence they woul…" (`ytc_UgyRxNklK…`)
- "turnitin’s detector is getting sharper for sure. Winston AI is useful to check i…" (`ytc_Ugy8krp-g…`)
- "I feel like most of the commenters either didn’t watch the whole video or have v…" (`ytc_UgzrlSRi_…`)
Comment

> The biggest concern is about the moral and ethical principles used by those running AI-developing companies. How do they treat their employees? How do they interact with the outside world in their daily lives? What are their corporate principles? If their corporate principles are inhumane, I'm not sure we can be optimistic about the future.

youtube · AI Governance · 2026-04-11T18:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxlZ3jwYjiWRTxHcYl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxgjqfoipLlKOdcjWF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxySsOOm9GeARlnWmJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzRW87vofgwxye7pbF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyPl-kPcX7YmVGj7K14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxD4C73oSlR-1NkQHV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxK8OBb34KxjMHuWGp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxiBlQBuaydq1HEPpZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyfEUfhuE4I8XztJwp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw8KRY7kipqHH7--T14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
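The lookup-by-comment-ID view above amounts to parsing the raw batch response (a JSON array of per-comment codes) and indexing the entries by their `id` field. A minimal sketch in Python; the function name is hypothetical, while the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come directly from the response shown, with the array truncated here to two entries:

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as shown above
# (truncated to two entries for brevity).
raw_response = """
[
  {"id": "ytc_UgxlZ3jwYjiWRTxHcYl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxD4C73oSlR-1NkQHV4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch coding response and index the code dicts by comment ID."""
    codes = json.loads(response_text)
    return {entry["id"]: entry for entry in codes}

lookup = index_by_comment_id(raw_response)
print(lookup["ytc_UgxD4C73oSlR-1NkQHV4AaABAg"]["emotion"])  # outrage
```

Keying on `id` lets the inspector jump straight from a comment to its coded dimensions without rescanning the whole response.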