Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by its comment ID.
Random samples
- "@JohnSmith-x3y8h last to market, last in safety, last in QC, last in self driving…" (ytr_UgyirUhzh…)
- "Both are AI, or appear to be. It's difficult to tell though, AI advances so ofte…" (ytc_UgzYxJiIP…)
- "Upskill yourself instead of whining is all I can tell people. When computers wer…" (ytc_UgzgVhv4j…)
- "Videos like this make me think, just bring on the asteroid before we wipe oursel…" (ytc_UgyazVQDL…)
- "@gamersnexus I genuinely think that you should connect with Ian Carrol and other…" (ytc_UgxjIgW2u…)
- "What's crazy to me is this is just a massive fuck-you to workers. Imagine taking…" (ytc_Ugw91SmqI…)
- "An IG doesn't steal, technically. It's public domain art, it isn't copyrighted. …" (ytc_Ugzn7bT9I…)
- "Sounds like the algorithm chose the ppl and the police were used te vet the popu…" (ytc_UgyfgDE5b…)
Comment
I'd much rather that potentially harmful AI has a slower rate of innovation in order to secure people's safety and privacy, it's a no brainer when those are the two options. You should be incredibly suspicious of any company or individual who advocates for innovation over human safety; they're essentially telling you that they would happily harm you to get their tech business off the ground. Also, why must innovation have no boundaries? I reject the presumption that in order to innovate we must also sacrifice or exploit. Great to see the EU being a world leader in this.
youtube · AI Responsibility · 2024-09-21T18:3… · ♥ 55
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
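Each coding assigns exactly one value per dimension. As a minimal sketch of that record shape (field names taken from the table above; Python and the example vocabularies are assumptions, not part of the tool itself):

```python
from dataclasses import dataclass

@dataclass
class Coding:
    """One LLM coding of a comment: one value per dimension."""
    responsibility: str  # e.g. "company", "government", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "regulate", "liability", "none"
    emotion: str         # e.g. "outrage", "fear", "approval", "mixed"
    coded_at: str        # ISO 8601 timestamp of when the coding was produced

# The coding shown in the table above, as a record.
row = Coding("company", "consequentialist", "regulate", "outrage",
             "2026-04-26T23:09:12.988011")
```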
Raw LLM Response
```json
[
{"id":"ytc_UgyPPmg6Jj8H9wMtmEJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxXtCnlZM5VbqI7-3Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxckDVukAKHU2EU9Ed4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgykbPa_YSnH1gwTTyB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxdr6BHlEwdnfHDjY94AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyiilkLYnuJZEwIDRx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw87k4XykZ3ZYnVcVd4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwwQUpfUXsyeaixgTd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxW_1UCUKYDyRVKa7B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"regulate","emotion":"unclear"},
{"id":"ytc_Ugz6iBAkSLmTXR14Tn94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
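The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal lookup sketch (Python assumed; the two IDs and codings below are copied from the response above):

```python
import json

# A raw batch response from the coding model: a JSON array of objects,
# one per comment, each carrying a comment ID and the coded dimensions.
raw_response = '''
[
  {"id": "ytc_UgxXtCnlZM5VbqI7-3Z4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgykbPa_YSnH1gwTTyB4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgxXtCnlZM5VbqI7-3Z4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company outrage
```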