Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Thing is, the ai isn't the problem, the Internet is, the majority of ai pull fro…" (ytc_UgxPMF1mI…)
- "It's sad because when I searched for anime girl images a few years ago it came u…" (ytr_UgxrREFZy…)
- "Thank you! This is so inspiring and I'm hoping I can find a way to also bring th…" (ytc_UgyaKQJeG…)
- "It's so asinine to me that people will look at crime statistics, look at what an…" (ytc_UgwSj7LVM…)
- "This is an act. He said to the robot that he had it connected the robot then res…" (ytc_Ugz6A6FlV…)
- "Yes, it's crazy that people would rather talk to chatbots than get judged and ca…" (ytr_UgxdREewh…)
- "How to end AIs: 1. Plug it off 2. Don't feed electricity 3. Destroy their AI ser…" (ytc_UgzvMJuMF…)
- "You are becoming too practical kwestyon, AI simply cannot give that human touch…" (ytc_UgwvayPEH…)
Comment
What a tragedy the loss of life. The context is that over 40,000 people in the US die in car accidents every year. I own a 2025 Tesla Model 3 and have done 17,000 miles in Full Self-Driving, and yes, you need to supervise the car 100% of the time while the technology is driving. I would like to point out that my car has kept me out of several accidents because it could respond faster than I could. The data is already showing that supervised FSD has far fewer accidents than the average driver on the road. The effect is that fewer people are getting hurt or dying when this technology is used. The accident avoidance system works extraordinarily well. I would like to see 60 Minutes do another piece where they look at the entire spectrum of the technology, because it is already saving lives. This technology has been completely redone since 2019. Yes, it still has issues and it still needs to be supervised. With that said, it is preventing injuries and saving lives.
youtube · AI Harm Incident · 2025-10-22T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwvPLhlRk0qSqQjXrx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzuAZSgaG7ls2Mw37Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxcURLOJPFcbmfwhCp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwxewlIiwb4oT14LY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyejCoE2dQxEafIaJB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw6EC-bQnwDazllkEp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzuNtJaaFEwatvsQ5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyux2RlLLKNjE0v8ON4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxKm5qAFE2OiEgF0QR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxSzEh_OTKgKQmy2fZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
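Each raw response is a JSON array of per-comment codes along four dimensions. A minimal sketch of how such a response could be parsed and sanity-checked; the allowed-value sets below are inferred from the examples on this page, not taken from the pipeline's actual schema:

```python
import json

# Value sets observed on this page (an assumption, not an authoritative schema).
ALLOWED = {
    "responsibility": {"none", "company", "user", "government"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown dimension values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}: {rec.get(dim)!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = validate_codes(raw)
print(codes[0]["policy"])  # → regulate
```

A check like this catches the common failure mode where the model invents a label outside the codebook, so bad records surface at parse time rather than skewing downstream tallies.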