Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "We know that we are at the edge of the cliff but all of these so called smart pe…" (ytc_Ugyd1lJV2…)
- "This is what you get for using open-source alternatives to large language models…" (rdc_jienm3c)
- "People use our fear of loss to grant confident on themselves and preach you that…" (ytc_Ugxo0JTj8…)
- "Using the best LLMs currently available to create an application of even an aver…" (ytr_UgySr2Xqj…)
- "If AI usage was supposed to be a positive future in their minds, they would use …" (ytc_UgyecPBl5…)
- "Are we not gonna talk about at the start a robot looks dead into ur camera…" (ytc_UgzjlepQa…)
- "There is one rule about sefl learning AI, or conscious AI, and that is you can n…" (ytc_UgxQSQPG-…)
- "@thewannabecritic7490 AI makes custom porn, tabletop RPG visual aids, concept/in…" (ytr_UgzQY-i6A…)
Comment
Tesla owner since 2020 here. Its been a great car, but I've seen the mistakes autopilot has made early on. I rarely use it. Only in the day or with clear visibility. Notice how a lot of those accidents happened at night or low visibility? It should never have lost the radar (early models like my car have radar) , and under the name "autopilot" in my opinion. "Advanced assist" maybe. If it cant detect objects like a plane does with radar, because it doesn't have radar, how can you call it the same thing? At that point you're flirting with giving people false sense of security. Over the past few years Tesla have added multiple prompts and alerts to increase driver attentiveness which has helped a lot. Elon says the system is safer than a human. So why would it need human supervision? You cant have it both ways. Tesla have now switched to an AI based system instead of the old code-based system which has been a big leap forward. How big? Time will tell.
| Source | Category | Timestamp |
|---|---|---|
| youtube | AI Harm Incident | 2024-12-14T00:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxO7Yy5thLrxDjjDnB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzeSpRh3eCXBvP_rZB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwm5s3Lm3hB_8dJdVh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx-NmIdlBtl_38jXFp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxOg2O0Bqjo4GEYU_N4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxEr5U12jviP2Q9lAB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy2KoCVow2wqdNid094AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx0FAravm_KUWK8-Hl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzD7sZ_kL4tx_g88iF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyGXuqufy4OiabyZjF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
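A response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the pipeline's actual loader; the allowed value sets are only those observed in this sample and in the table above, so a fuller codebook may define additional labels.

```python
import json

# Value sets observed in this sample batch (assumption: the real
# codebook may allow more labels than appear here).
SCHEMA = {
    "responsibility": {"none", "company", "user", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"indifference", "outrage", "fear", "resignation"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it has an "id" and every coded dimension
    holds a value from SCHEMA; anything else is silently dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

For example, feeding it a two-record batch where the second record carries an out-of-schema `responsibility` value returns only the first record, which is one simple way to catch a model drifting off the codebook.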