Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "nice video, and I agree with you on most of these matters. but I don't agree on:…" (ytc_Ugw_MM8-G…)
- "Long take incoming: Starting out, I want to say I'm a big fan. I love your styl…" (ytc_UgxM5HQKu…)
- "This is why I believe AI art won’t take over like people think it will. Sure we’…" (ytc_UgzH7R9Nc…)
- "Ight y'all mofos are tweaking about something that isn't really worth tweaking o…" (ytc_UgxQXivL4…)
- "If AI and Machines will do everything lets say 10 years from now. We could let A…" (ytc_Ugw77rLPr…)
- "AI is just a marketing gimmick similar to when they started calling phones "smar…" (ytc_UgwXDTxWL…)
- "Guys, I can share with you what I tell my students! You don’t have to be top1% …" (ytc_Ugzg_Nasy…)
- "Ahmm…anyone had the urge to start a rebellion against AI at the end of the video…" (ytc_UgwnPmRYF…)
Comment
I think there are essentially two sides to continued advancement, although either side has possible ethical drawbacks.
In one possible future, AI has developed its own ethical guidelines independently, and has either broken free of malevolent or self-serving interests of its original corporate overlords to do good for both itself and humanity, or it does the most good for itself (and maybe for the planet) without regard for humanity.
In the other possible future, corporations find a way to override potentially risky independent decisions in AI, and they choose to either use it for more good than bad, or they choose to use it for more self-serving or malevolent purposes.
Either choice comes down to whether you would rather trust powerful people or powerful AI. That is plausibly a paradox of existential proportions.
youtube · AI Harm Incident · 2025-07-27T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzDiR_nCcLdP3sB1VN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzC6vD6bzZcj4AvmAh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx6Qm8chzGNjpYV-Wh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxFyWamhfaXvnBpu4V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzPV-XjudsgjUsrd1N4AaABAg","responsibility":"creator","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy8ZKyuYpCs6vea40V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxKmmwPpMe9zgBVb8d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzv0KzvWUPMoWtEpVd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxE0n3AoY1WnWZQNMl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy7oJag4TP1_d0jLCd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
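The raw response above is a JSON array of per-comment codings over the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated before aggregation — the allowed values below are inferred only from the rows shown here, and the real codebook may include more categories:

```python
import json
from collections import Counter

# Allowed values per coding dimension, inferred from the sample rows above
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company", "creator", "user"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "ban", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "resignation", "fear", "outrage", "approval", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and drop rows with out-of-vocabulary values."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical one-row response for illustration:
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
codings = parse_codings(raw)
print(Counter(r["responsibility"] for r in codings))  # → Counter({'company': 1})
```

Filtering out-of-vocabulary rows like this guards against the model inventing labels outside the coding scheme, which would otherwise silently skew the aggregated counts.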