Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "People really doing witch hunts for people with a passion cause they never even …" (ytc_Ugxnin4Jo…)
- "so amoungst all those human beings they couldnt realised how fkn dumb they are b…" (ytc_UgxTIKk0p…)
- "It's almost like when you train an AI on the entirety of human conversation and …" (ytc_Ugx2MPBSM…)
- "> It doesn't matter whether real material was used when training the model or…" (rdc_lu6jr1y)
- "Through embedded nanobots we'll also be linked to a global AI. Question is, who …" (ytc_Ugxu7mTB2…)
- "But it's fooled by a simple face mask...because it relies on facial recognition …" (ytr_UgxdpRzp9…)
- "I'm dumb and old. to me, AGI is adjusted gross income and an LLM is lunar landin…" (rdc_n7yc3w1)
- "This is completely wrong. more than 90% (sourced!) professional engineers are al…" (ytc_UgyLHWFGM…)
Comment

> Just did an experiment and I guessed the distance to things within my room with one eye closed and then verified with a tape measure. I was actually quite accurate even with one eye closed, but that relies on me being able to clearly see, and also know about how large things are supposed to look. Tesla has talked about using Lidar to train their AI to determine distance, so I'm pretty sure this is the technique they are relying on. Seems well enough for well lit and clear conditions but could quickly go to shit in dark or rainy conditions. Humans manage just fine with just our eyes, but our eyes have MUCH better resolution than Tesla cameras, generally better adaptation to different lighting conditions, etc. That and our brains are definitely much better at interpreting our vision than Tesla AI as of now. So while they say they're relying on the same techniques humans have and they shouldn't need more than that, it's really not a direct comparison to human capability as of yet. In some minor ways it's better because it can monitor all directions at once, but in specific, it's much worse.

youtube · AI Harm Incident · 2022-09-06T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgyMOGAAN8V6nxQQ4294AaABAg.9fe8U1o_TQv9feGSzTVXnA","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyMOGAAN8V6nxQQ4294AaABAg.9fe8U1o_TQv9feO8Io_GD-","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgyMOGAAN8V6nxQQ4294AaABAg.9fe8U1o_TQv9feRU835SXZ","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxiIRBnky9EHqVXCyt4AaABAg.9fdqmlcyset9ffacJaVNpv","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgzaaQAN2LOIMrtIFk94AaABAg.9fdaFSfnk5v9fi8xnVArlo","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzaaQAN2LOIMrtIFk94AaABAg.9fdaFSfnk5v9fijwRpnu4p","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzaaQAN2LOIMrtIFk94AaABAg.9fdaFSfnk5v9flE6Yh_J1V","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytr_UgysPhuxG-ysnjpUz3R4AaABAg.9fda7-4Hr4F9ffRjeVvJRl","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgwStSaNS7HtXz3OMHt4AaABAg.9fd_4764hVs9g7sZRtarSn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgzFF3KTuHnW0XX1o594AaABAg.9fdUgQVEE2P9fdVZS-5VPB","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
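The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a response might be parsed and validated before storing it; the allowed value sets below are inferred from the codes that actually appear in this dump, not from the tool's real schema, so treat them as assumptions:

```python
import json

# Allowed codes per dimension, inferred from the values visible in this dump.
# The tool's actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "distributed"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping rows
    whose values fall outside the assumed codebook."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded

# Hypothetical one-row response, shaped like the array above.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"}]')
print(parse_codes(raw)["ytr_example"]["emotion"])  # mixed
```

Validating each row before lookup is what lets the dashboard show a clean per-comment table like the "Coding Result" above even when the model occasionally emits an off-schema value.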