Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @jordansandor6449 A.I. is an inflationary overused term, which annoys many, real… (ytr_Ugz-jQoRh…)
- We need to ban AI droids before our world becomes a ball of fire of war… (ytc_UgzM07zCX…)
- Because that AI art is too easy to make. Art is a long process not instant noodl… (ytc_Ugwa8Q94X…)
- This was the timeline we created for a hidden AI singularity, it could happen an… (ytc_UgxOAD4qi…)
- Awesome video! But I don't see why we should go beyond stage 5. That will be the… (ytc_UgyFUVrB1…)
- Fake cause theres a channel where he asks ai to show a salad without peas but it… (ytc_UgzH3mCHJ…)
- Soon ai and these models will become very smart and take some jobs or a lot of i… (ytc_UgyAneEjV…)
- Because A.I. is anti-christ Intelligence. Isn't it Alicia? But I'll tell you rig… (ytc_UgxWRGlvt…)
Comment
I have a Tesla and I absolutely love it but I went into it knowing that we are just not technologically able to do full self driving 100% flawlessly yet. This isn't just a Tesla issue, it's just a technological advancement issue. As a programmer and engineer I can tell you it is infinitely complex to implement flawlessly. There are just endless unpredictable scenarios. That we are very close is amazing to me. I wonder if the likelihood of crashing while using self-driving is actually less than a human driving but because the media loves to sensationalize the times the technology fails, it seems worse than it is? People crash and die every day all over the place but no one cares unless it is a self-driving car then it makes headlines. A quick google AI search tells me they are roughly twice as likely to have a crash but the crashes are typically much more minor. So take that for what it's worth. I still love my Tesla but I have never used the self-driving. Besides, what fun in driving a 510hp car if I let the computer drive it like a granny.
youtube
2025-12-01T03:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyNLWo5VCbob5L0X-p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwBtpXLzgI4bRKk5dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxgLY1WgooaISEQSUx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwYO0yYb78CPHduc854AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzua0nvI44VLcKptdl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzogkL5HsDllHStOY54AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwgwHfOOjhiFDuxKYN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugww6AC1nxyd9826lYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyH4vEt6Rm9udsFHX94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxTS-xnQ3-EA1VFPTt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
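The raw response is a JSON array of per-comment codes keyed by comment ID. A minimal sketch of how such output could be parsed and screened before accepting it into the coded dataset, assuming the allowed category sets are those visible on this page (the real codebook may define more values):

```python
import json

# Allowed values per dimension, inferred from the coding table and the
# responses shown above -- an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "company", "user", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "fear"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    all fall inside the known category sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
```

Rows with an out-of-codebook value (for example a hallucinated `"responsibility": "society"`) are dropped rather than silently stored, so they can be flagged for re-coding.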