Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Guaranteed that data center is for AI so it would do nothing of what you claim. …" · `rdc_oi4693k`
- "What's app now has a direct line to AI. That is presently for free. I t is tot…" · `ytc_Ugy-NyZHu…`
- "Good luck with your self driving car in a blizzard in an area with no connectivi…" · `ytc_UgxOqGe1-…`
- "After Elizabeth Holmes did what she did, I do think these people are just.......…" · `ytc_UgzSfrZOW…`
- "Wow, great interview. Thought experiment: If AI does become a super intelligen…" · `ytc_UgzUDrFH8…`
- "People are upset. My workplace goes through a lot of temp workers. I was one of …" · `rdc_o4i8k0u`
- "That's a great analogy! It's true that wisdom goes beyond knowledge and involves…" · `ytr_Ugz9pjOpW…`
- "Maybe to prevent ai from wanting to over take humans.. Raise them like we do our…" · `ytc_Ugz2281LW…`
Comment
I'm skeptical when AI company CEOs and developers say things like "Sure, AI development has some downsides, but the benefits will outweigh them in the end, so we should keep pushing forward." It feels like they're not genuinely worried about the negative consequences they're creating. Instead, they're basically dumping the responsibility for dealing with those problems onto politicians and the rest of society.
Are these companies actually putting their money where their mouth is? Are they donating to help people who've lost jobs to automation, or supporting communities that are being disrupted by their technology?
youtube · 2025-06-07T05:1… · ♥ 29
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwGr3gJFmjAn3cM5fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwgMCVUt2G7xe2l8A54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzcLY1zA-7BPxyhqp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLk3VDUn4wE1UhE2h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzdx7L0GyIjh0SrG_14AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMsSzdQr2BPE948ll4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxbiQVTSNlsvisN2SZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyGy2yb7MN6WPrnAzN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxio3XMveQozMOs9rR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"skepticism"},
  {"id":"ytc_Ugzg0_odCbW5QVGo56R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```