Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect:

- "I don't think humans are wise n ore moral enough to properly create safe ai.…" (ytc_UgwO8t889…)
- "Future AI systems will almost certainly lean on this same kind of “avoidance” be…" (ytc_UgyXf8CjF…)
- "What about UBI to fight the AI automation? Should we have a robotax? Although as…" (ytr_UgxohdjKl…)
- "AI? What a crock, the only people who'll suffer are those reliant on tech, live …" (ytc_Ugz0grQ5u…)
- "Bro in India , companies are investing just because of huge market and buyers.If…" (ytc_UgxggSy1q…)
- "Wall Street is 100% indifferent about mass-unemployment as bottom line costs of …" (ytc_UgzQ3ISSA…)
- "Some tried to warn about the ai, they ignore it. This is why they should listene…" (ytc_UgwKwqUPz…)
- "Wow, content-moderator psychosis? Due to listening to real & imagined hateful sp…" (ytc_UgyM3dMAk…)
Comment
Shallow psychologic and ethics descriptions made by people that have no idea of what ethics is. Starting with a woman shaped conversational automate: what is it supposed to imply?
Then you can keep the "deep thinking" of Elon Musk.
youtube · AI Governance · 2024-01-14T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzgvwysCOOqiBJHEyN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7RlpkQp9mj75ztYV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGv3bocHm-8qL1oZ54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzGDFIcG55KSKJ5Q7l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxXC2c7pVt18XFh5yd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwTrZ-8jkwF6ihK4kl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwNyiggGumQcU-HVnV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzNR0soGx0UH2znu-F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwXiOOtFW7FrC4JBNl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy4ihfRTilYJBadxth4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
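Looking up the coding for a specific comment ID can be sketched as below. This is a minimal example, not the tool's actual implementation: the `raw_response` string is a hypothetical two-entry excerpt in the same schema as the raw LLM response above, and the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the coding-result table.

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codings,
# using the same schema as the response shown above.
raw_response = """
[
  {"id": "ytc_UgzgvwysCOOqiBJHEyN4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzGv3bocHm-8qL1oZ54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

# Index the codings by comment ID so each lookup is a single dict access.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzgvwysCOOqiBJHEyN4AaABAg"]
print(coding["emotion"])  # outrage
```

Keying the parsed array by `id` mirrors the "look up by comment ID" workflow: the exact model output for any coded comment is one dictionary access away.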