Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I love the people that redraw this stuff but there's this thought at the back of… (ytc_Ugx9Znbsw…)
- TY, Bernie. I've said the same since AI reared its head. It's crazy to me that w… (ytc_UgyLbuoOy…)
- AI is just a search engine, it is influenced by the words he searched out online… (ytc_UgzzI6Dtg…)
- 😮 wow like ai couldn't make a mistake yo ( the end is near) beware 😮… (ytc_Ugzm_UGOh…)
- I agree. The girl doesn't know what shes talking about. There are many reasons w… (ytc_Ugw-HpkQJ…)
- I used AI to make a book cover simply because i was behind schedule and i would … (ytc_UgyLtuKem…)
- You could also better control what goes in prompt and out, have the AI trained t… (ytc_Ugwyrpos0…)
- Cant believe didnt mention "EATR", the series of autonomous drones made by DARPA… (ytc_Ugy-AGVat…)
Comment
How many racist white misogynist tech bros are the only ones working on this shit? No wonder the "AI" (aka large language model) is just producing absolute bullshit. The medical studies it deals with are made up, the companies steal everyone's art and data and have put a stop to suing them after they stole everything, models have gone thru the entirety of data and the models are still misaligned, trillions of dollars gambled with no actual viable product, could end humanity as we know it, produces nothing but stolen content, has killed human beings already, there's huge issues with security, doxxing real people, causing psychosis, taking jobs, what the fuck is the actual benefit of this? THERE IS NONE. This is absolute BULLSHIT!!
youtube · AI Harm Incident · 2025-07-24T11:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx0Mz-Uq6nInHkizM94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzOwznDmrBuFAjfulh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzm6RXKt8PDHFsz_ed4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxgyFnLh83S5-YUg794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzR1v6VapLfErgCIJV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxgBEl-n5Fc8QRw5Dt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-zZPSdKVlUAVEGo54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwYljjDBxPbDhtoQql4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxjF5BPBcdNR_XaGIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyKrcUr-tjxO6jnfYh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
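The raw response above is a JSON array of per-comment codes along the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and indexed for the "look up by comment ID" behavior described above — the two sample records are copied from the array; the `lookup` helper is an assumption for illustration, not part of the actual pipeline:

```python
import json

# Illustrative raw LLM response: two records copied from the array above.
raw_response = """
[
  {"id": "ytc_Ugx0Mz-Uq6nInHkizM94AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzOwznDmrBuFAjfulh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Parse once, then index records by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return codes[comment_id]

record = lookup("ytc_UgzOwznDmrBuFAjfulh4AaABAg")
print(record["responsibility"], record["policy"])  # developer liability
```

In a real pipeline the raw string would first need validation (the model may emit malformed JSON or IDs that were never in the batch), so wrapping `json.loads` in a try/except and checking each `id` against the submitted comments would be prudent.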