Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I like the critical evaluation of Tesla’s claims and strategy. But while most people see the "false promises" and think it’s just a scam, they completely miss the way Musk works as a person, an entrepreneur, and an engineer. Musk’s whole engineering strategy is built on "ordered chaos." Look at SpaceX. What does Elon do? Blow up rockets to see if the tech works. Musk is impulsive and sets these insane, aggressive deadlines that he probably won't hit. But look at the results: it creates a massive sense of urgency that forces innovation way faster than anyone else could manage. Yeah, he’s gambling with safety to an extent, but his bet is that getting a 10x safer system on the road sooner will save far more lives in the long run. The new 2026 facts people keep ignoring: The Stats: FSD Supervised is literally hitting 1 crash per ~5.3M miles now. Compare that to the US average of ~0.66M miles. The data is starting to speak for itself. Legal Status: It hasn't been forbidden by any major authority. If it were truly the "death trap" critics claim, the NHTSA would have pulled it off the road years ago. Instead, it’s about to be approved by the RDW in the Netherlands, one of the strictest regulators in the world. Vision-Only: Everyone laughed at the "no radar" move, but with the latest AI neural-net updates (v14.3), Vision is proving to be quite good. It’s scalable and mimics how humans actually drive. History shows Musk usually delivers on the "impossible"; he’s just never on time. So take his claims with a grain of salt, but know he will get the tech right one day.
youtube 2026-04-10T13:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw6CamZip73uj3w1FZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxWWUC1jIbu6-ggbet4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwslqgi7fU2vtW1cXV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwzXu-IbqOx8EnyA1d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxItVi0wv8yipQpyBd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwLMa-o_ufE2RlCPBt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw2xb40ncFvrgaPD6p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgylL4hXJYdqnyKeeg54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzCh89OdYMwf2U5Zex4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxCzZaBgd3KbSnpXy14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
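A raw response like the one above should not be trusted blindly before it is written into the coding table: the model can drop a field or emit a label outside the codebook. The sketch below, a minimal assumption-laden example (the `ALLOWED` label sets are inferred only from the values visible in this response, and the full codebook may contain more), parses the JSON and keeps only well-formed records.

```python
import json

# Label sets inferred from the response shown above; the real codebook
# may define additional values (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "government",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and return only the well-formed records.

    A record is kept when its id carries the ytc_ comment prefix and
    every coding dimension holds a label from the allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue  # not a YouTube comment id
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Running it over the response above would keep all ten records; a record with a typo such as `"reasoning":"consequentalist"` would be silently dropped, so in practice you may prefer to log rejects for manual review.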