Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- AI is a tool. With physical brush you make an painting in a week. With photoshop… (ytc_UgyHzzw_U…)
- @asuyaTora That's wrong, AI has existed for at least 10 years as LLMs, but it's just exp… (ytr_UgywB_8El…)
- Don't forget AI was made by a Whyte man so you know its all fuc*** up… (ytc_Ugxa_MnKs…)
- Ever since I was a kid, I've always been praised for my creativity and my writin… (ytc_UgypRR_hv…)
- How about I dont want to use AI, I like to use my brain, thanks.… (ytc_UgwrVZ79Z…)
- @tanya2660 It's obvious you didn't read all my post. I did say that if you abu… (ytr_Ugzyj7SNc…)
- As a species we long dreamed of a day when robots would work and humans would re… (ytc_UgxLD0BzD…)
- Yo, and so many people still mess with Tik Tok. I'm convinced it's used as a glo… (ytc_UgwnGZCLK…)
Comment
I think Tesla's biggest fault was that they continued to let a user with many automatic disable events continue to use the feature. You could argue that McGee should've never been allowed to use AutoPilot after so many strikes.
Back when FSD was first being released to Model 3 owners, you had to opt into Tesla's Safety Score system to even get a chance to get access to it. If I remember correctly, you had to have around 90%+ safety score to have a likelihood of getting access. That system sounds like it would've helped here, but it also had the problem of being incredibly restrictive for people that paid for FSD. I paid for it on my old Model 3, and it took me a while to get access to it. I think I had an average of around 85% and I have zero accidents and zero tickets with over 20 years of driving. The problem was that if you had to brake or accelerate too hard for what Tesla deems acceptable, then the system would ding you for it. It was so easy to get hit for things that were somewhat out of your control that people were gaming the system by signing up for the system and going out in the middle of the night to drive on the barren highways. The idea was that you could set it to use AutoPilot for around 100 miles, and any hard braking due to AutoPilot would *not* count against you. So, you just drove enough for the system to get enough information about you, and you'd likely end up with a perfect 100% score.
| Field | Value |
|---|---|
| Platform | youtube |
| Incident | AI Harm Incident |
| Posted | 2025-08-15T18:5… |
| Likes | 46 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_Ugxv1KqFmv4nmdmGM5F4AaABAg.ALrCgWHmHROALrHOVnSTJr","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwK9BAnYJiStXntL0l4AaABAg.ALrCKN4hqx7ALrK2GZsLio","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugy-wc-6U2A1Jl0St4F4AaABAg.ALrCFYlp7hUALreadi6edv","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxZ2O-k-Jju9XGNmZt4AaABAg.ALrCCJsSnifALrF1Tmd7mm","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"},
{"id":"ytr_UgxZ2O-k-Jju9XGNmZt4AaABAg.ALrCCJsSnifALrFPHs7Wh3","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_UgwRryc8nBP0waW8VAV4AaABAg.ALrBo0rFly3ALrLfGT7u4s","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxPgSpdpfppnKSF_OB4AaABAg.ALrBjX9f_WsALrCspgJHwT","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgxPgSpdpfppnKSF_OB4AaABAg.ALrBjX9f_WsALrGLXizNfN","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzmoUT3xNeLGltDtPh4AaABAg.ALrBVRvYljKALrDznthT9u","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzmoUT3xNeLGltDtPh4AaABAg.ALrBVRvYljKALrEntAcpZY","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
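A raw response like the one above has to be parsed and checked before its codes are stored. The sketch below shows one way to do that, assuming the allowed values per dimension are exactly those observed in the samples on this page; the real codebook may define more values, and `validate_response` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed codes per dimension, inferred from the samples shown on this
# page (assumption: the real codebook may include additional values).
ALLOWED = {
    "responsibility": {"company", "user", "government", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "resignation", "mixed", "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

sample = '[{"id":"ytr_abc","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
print(len(validate_response(sample)))  # prints: 1
```

Validating at ingest time keeps out-of-vocabulary codes (a common LLM failure mode) out of the results table, so a downstream tally per dimension never has to guess what a stray value means.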