Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How many racist white misogynist tech bros are the only ones working on this shit? No wonder the "AI" (aka large language model) is just producing absolute bullshit. The medical studies it deals with are made up, the companies steal everyone's art and data and have put a stop to suing them after they stole everything, models have gone thru the entirety of data and the models are still misaligned, trillions of dollars gambled with no actual viable product, could end humanity as we know it, produces nothing but stolen content, has killed human beings already, there's huge issues with security, doxxing real people, causing psychosis, taking jobs, what the fuck is the actual benefit of this? THERE IS NONE. This is absolute BULLSHIT!!
Source: youtube · AI Harm Incident · 2025-07-24T11:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugx0Mz-Uq6nInHkizM94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzOwznDmrBuFAjfulh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzm6RXKt8PDHFsz_ed4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxgyFnLh83S5-YUg794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzR1v6VapLfErgCIJV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxgBEl-n5Fc8QRw5Dt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz-zZPSdKVlUAVEGo54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwYljjDBxPbDhtoQql4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxjF5BPBcdNR_XaGIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyKrcUr-tjxO6jnfYh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
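The raw response above is a JSON array with one record per comment, keyed by comment id, with one value per coding dimension. A minimal sketch of how such a response could be parsed into per-comment codes is below; the allowed value sets are inferred only from the labels visible in this response (the full codebook may contain more categories), and the fallback-to-"unclear" behavior is an assumption, not the tool's documented logic.

```python
import json

# Allowed codes per dimension, inferred from the values seen in this response
# (assumption: the actual codebook may define additional categories).
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "approval", "resignation"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: code}}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        codes = {dim: rec[dim] for dim in CODEBOOK}
        for dim, value in codes.items():
            if value not in CODEBOOK[dim]:
                codes[dim] = "unclear"  # fall back rather than fail on a stray label
        coded[rec["id"]] = codes
    return coded

# One record taken verbatim from the response above.
raw = ('[{"id":"ytc_UgzOwznDmrBuFAjfulh4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
coded = parse_coding(raw)
print(coded["ytc_UgzOwznDmrBuFAjfulh4AaABAg"]["policy"])  # liability
```

Validating against a fixed value set at parse time catches malformed or off-codebook labels early, before they reach downstream analysis.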