Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- next Varun is gonna say AI replacing tornhub 😂people. I'm waiting for this...com… (`ytc_UgxQa6hbr…`)
- Yeah a lot of anti AI people get branded Luddites when in reality we have very r… (`rdc_nwbloso`)
- i love this explanation of ai art in itself. I draw too, traditionally and digit… (`ytc_UgxyB1Cmb…`)
- As someonr with a degree in Multimedia Communications, but also an ACTUAL unders… (`ytc_UgwJ74jBq…`)
- 0:21 "thinking that she can save her art from our theft which isnt theft, it's a… (`ytc_UgyBeVUa3…`)
- Seethe. I'm a programmer. If AI takes my job then I wasn't a good enough program… (`ytc_UgzR9TuUi…`)
- In a happy utopian world men can relax whole AI works for us as it is meant to… (`ytc_UgwhdwRBn…`)
- well arg is valid, but example is bad, bc it is very interesting to watch how ai… (`ytc_UgzEvlEpv…`)
Comment
> This is fairly mis-informative. In fact harmfully so. And ironically profitable for AI corporations.
> What we have is not AI, not anywhere close to that. We have a supercharged autocomplete, a thing that will spit out data based on what came before with no actual reasoning behind it. just the next most likely string of text. LLMs arent AI, LLMs arent malicious or smart or anything. All of these things are human attributes, something people misleadingly label LLMs as simply because the technology looks sentient at a glance. It is not. Its a glorified curve fitting algorythm.
> The utter failures with integrating this technology right now come from
> 1) how utterly unneeded it is
> 2) how utterly expensive it is
> 3) how utterly stupid are the places where it is being shoved
> as per usual, its a human-driven issue. The focus must lay not on what the LLMs may or may not do, but what people in charge do with it.
> I wouldnt trust my phone's autocomplete to do decisions for me. why are we trusting it with deciding if a person lives or not.
Platform: youtube · Video: AI Harm Incident · Posted: 2025-09-04T15:0… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwhKSi4AvmQKlXB3Jd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwMk19NmitLDgttrSd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzC49YVSTcp9TSxCmp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwC9LmCD1UOc_9PXQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyUHWaxV48XeKDSYuF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy37Zfif4nDLZ7SUWp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQrMydyw7E2Gl1oqp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx9d_oCo_VtLvgStFd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyLofVtcZ2g4VHJvtd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzk3Izz4pzDfc69QSB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
```
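The lookup flow this viewer implements (raw batch JSON → per-comment coding result) can be sketched in Python. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above; the `index_by_id` helper and the truncated two-record sample are illustrative assumptions, not the tool's actual code:

```python
import json

# Two records copied from the raw batch response above (illustrative subset).
raw = '''[
{"id":"ytc_UgwhKSi4AvmQKlXB3Jd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwMk19NmitLDgttrSd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}
]'''

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse a batch response and index coded records by comment ID,
    rejecting records that are missing an ID or any dimension."""
    out = {}
    for rec in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec!r} (missing {missing})")
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

coded = index_by_id(raw)
print(coded["ytc_UgwhKSi4AvmQKlXB3Jd4AaABAg"]["policy"])  # → regulate
```

Indexing by ID makes "Look up by comment ID" a constant-time dictionary access, and the per-record check surfaces malformed model output at parse time rather than at display time.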