Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxegtVOr…: "So basically what they saying is AI only good when everything's at your job goes…"
- ytc_UgyNnbvwl…: "the tesla system is at least better than the ai in cyberpunk, whats not that har…"
- ytr_UgwW_oLB3…: "@smartistepicness Work used in collage isn't stolen. It's fair use. There's no s…"
- ytc_UgwIwjV4V…: "Most of the AI stuff today is just rebranded deep learning. Its all marketing g…"
- ytc_Ugx9QEcjD…: "Corporations would love robots everywhere. Greed dissolves compassion, so if you…"
- ytr_UgxdmOlxy…: "@wandertree it wont... just as much i want it to come lets face fact... deep dow…"
- rdc_n0gl8ja: "LLMs turn really stupid people into slightly less stupid people and makes them f…"
- ytc_UgxobimOP…: "Fake. The bullet dints are already on the car as it drives in. Robot getting the…"
Comment
AI is trained on human behavior. Of course it'll do horrible shit constantly. Because so do we. The only difference is it's able to process information faster than us, and is more "condensed" in a handful of individuals. This entire video is just fearmongering, saying "we're close to the end, AI is sentient, blah blah" no it's not. It's horrible, just like the worst of humanity, but it's also stupid. Also the title is straight up factually incorrect. AI has yet to "kill" someone. These aren't even simulations, they're basically just text RP with a chatbot. You are literally overflowing this ENTIRE thing. I agree with your call for caution and stuff, but you can do that WITHOUT terrifying half a million people with complete fucking nonsense.
Source: youtube | Incident: AI Harm Incident | Posted: 2025-07-28T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwstv46UcctpZKHO4B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzKtvAZ1E-2mk4HneV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzzbvmUCZ2j_YP4hB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzrYUrk4FUaKs6U7k54AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy7j06WvCeMR0ARtO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzDk-uVNvUGbUrPjZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3NnzFSlT1Vd0temB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0W3vQijoN0kAFGm54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxM1EliT5C_ARl0QjZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQZ99XFGoh9CGSe8p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
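The coding result above is recovered from this raw output by matching on the comment ID: the row for `ytc_Ugx3NnzFSlT1Vd0temB4AaABAg` carries exactly the distributed/virtue/none/resignation values shown in the table. A minimal sketch of that lookup, assuming the raw response is available as a JSON string (the variable names here are illustrative, not the tool's actual code):

```python
import json

# Raw model output: a JSON array with one coding object per comment
# (two entries from the response above, shortened for illustration).
raw_response = '''[
  {"id": "ytc_Ugx3NnzFSlT1Vd0temB4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy0W3vQijoN0kAFGm54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment, as the "look up by comment ID" view does.
coding = codings["ytc_Ugx3NnzFSlT1Vd0temB4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed resignation
```

Indexing by ID once, rather than scanning the array per query, keeps repeated lookups cheap even when a response covers a large batch of comments.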