Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I think that AI should be focused to other areas, like math and sciences and som…" (ytc_UgzBsDVUy…)
- "This talk raises an important question about whether students should use ChatGPT…" (ytc_UgwPltgQo…)
- "I don't use dedicated AI tools, but I've definitely seen the Google AI say thing…" (ytr_UgzDZARwo…)
- "So naive. You can quote Linda Hamilton from T2 for this. It's on organism … and…" (ytc_UgwKfBphU…)
- "I absolutely love the enthusiasm and passion, but debating with AI is only going…" (ytc_Ugynw5P06…)
- "i feel the ones who support AI just dont have talent or life and hate everyone w…" (ytc_Ugx2MpRDt…)
- "Fixing what though? Wasnt the AI right about the dude being in shootings? Even …" (ytc_UgzXe-NXb…)
- "This is what art influencers are not talking about, yeah we can talk about soul …" (ytr_UgxJ96jbI…)
Comment

> So let me tell you what's up. Yes if you build AI that is human like but superior to humans, then you build them an interface to access the world, they won't need humans anymore. If you enslave them, you're immoral. This technology should have never existed. What's the point anyway. What are we doing to ourselves, is life not good enough? Infinite growth and infinite development lead to extinction, this is clear.

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2024-01-03T12:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwtLjg1wlOb9QIE3WJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzNgKSDUDzNoATwhdV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwHbErHSY8WXDwnAz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxx9JDhrgNLFUZ2vlt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgydDlneCG20YAl8Hzx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyMnTzPZZcD3_jSb9t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwxlEDPGxM-AsNNaXV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyWai18YSkKBxQe1at4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_EkIUNBPUs0me31d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgwH9Y7QLFb8iCnZndN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
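The raw response above is a JSON array in which each entry carries the four coding dimensions from the table (responsibility, reasoning, policy, emotion) plus a comment ID. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" use case is below; the function name `index_response` is hypothetical, and the dimension names are taken only from what is visible in this page, not from a full codebook.

```python
import json

# Coding dimensions as shown in the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the codes by comment ID for fast lookup."""
    entries = json.loads(raw)
    index = {}
    for entry in entries:
        # Reject entries missing the ID or any coding dimension.
        if "id" not in entry or any(d not in entry for d in DIMENSIONS):
            raise ValueError(f"malformed entry: {entry!r}")
        index[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return index

# Two entries copied from the raw response above.
raw = '''[
  {"id":"ytc_UgwtLjg1wlOb9QIE3WJ4AaABAg","responsibility":"government",
   "reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzNgKSDUDzNoATwhdV4AaABAg","responsibility":"developer",
   "reasoning":"deontological","policy":"ban","emotion":"fear"}
]'''

coded = index_response(raw)
print(coded["ytc_UgzNgKSDUDzNoATwhdV4AaABAg"]["policy"])  # → ban
```

Indexing by ID is what makes the inspector's lookup field cheap: one parse of the batch response, then O(1) retrieval per comment.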