Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
4:32 Regarding partial automation, the experience of aviation where partial automation is very frequently used to enhance safety is worth looking at in detail.
In general, partial automation such as what Tesla offers can improve safety if a number of important prerequisites are met:
1. The one in command of the vehicle remains in command,
2. The one in command of the vehicle is aware of exactly what the automation does, how it works, and what its limits are.
3. The one in command of the vehicle has been adequately prepared for safe operation of the automation, via training, study of manuals, etc.
4. The one in command of the vehicle carefully remains in the loop and is actively involved in making decisions on an ongoing basis as conditions change.
I have seen absolutely no effort by Tesla to create any of these conditions, and few Tesla drivers taking their responsibilities in the matter seriously.
Personally I think that partial automation features like this ought to require at least a two hour training session from the dealer, a comprehensive manual which must be studied, and given the lack of a safe driving culture comparable to aviation, a knowledge test of that manual before the automation features can be enabled.
youtube · AI Harm Incident · 2025-08-16T03:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyvDXci1p2-Yrs5sAZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6DgxLtRJq9dfoCDV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxkQDAHqeIxk8oWRyp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHucMf6sLeBaLQKJZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzxQbMwt0z11poNgWl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwVCmDUny4Rk6bO2kt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy8dMfUOkvEfAI_-XB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwJZeZBKRxn0Kt1Hq54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzIkdQoKrCP_3M08SF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzvOo4cPOyqlToSfgF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```