Raw LLM Responses
Inspect the exact model output for any coded comment by looking up its comment ID, or pick one of the random samples below.
Random samples:
- "Mentioning the ethics is important because I often see a counter argument talkin…" (ytc_UgzW0PTY9…)
- "It will be interesting to see how it goes. So far Robotaxi has no commercially v…" (ytc_UgxDIWltd…)
- "My hypothesis (or hope) is that in order to train a system better than most huma…" (ytc_UgyiYJcYJ…)
- "Definitely human brain is more intelligent than AI because it’s human who invent…" (ytc_UgxH5DQPa…)
- "But the problem is AI doesn’t have intuition like humans. Like the Soviet office…" (ytc_Ugw1WdnFx…)
- "The AMOUNT of people that actually think this is a REAL robot is unbelievable. …" (ytc_UgxZjUQA7…)
- "Thank you for sharing your perspective on wisdom and its source. The interaction…" (ytr_Ugw4AQB8g…)
- "AI ? nope, it's bad parenting, looking for someone to blame to feel bettter lol…" (ytc_UgxXpghVi…)
Comment
Automated systems in aircraft are supplementary to the pilot in command, or driver of the vehicle in this instance. Nothing will ever be a true "auto-pilot" until vehicles are operating in a completely closed system without unknowns. In other words, never.
Telsa should only be listing these systems as assistants, and not calling them "auto-pilot", because they don't fulfil the role of the pilot/driver automatically. I'm retiring from British Aerospace Engineering in the coming years and amongst my colleagues we all need facial reconstruction by virtue of the amount of facial carnage cringe induced every time we hear someone say their Tesla has "auto-pilot".
Even trains aren't completely safe when using automated systems, and they have a one track mind.
youtube · AI Harm Incident · 2022-09-03T17:0… · ♥ 157
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
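
Each coded comment carries the same four categorical dimensions shown above, plus a coding timestamp. A minimal sketch of how such a record might be represented and sanity-checked, using a hypothetical `CodedComment` dataclass and value sets drawn only from the labels visible on this page (the actual codebook may define more categories):

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets observed in this sample; the real codebook may be larger.
RESPONSIBILITY = {"company", "user", "government", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "contractualist", "virtue"}
POLICY = {"regulate", "ban", "liability", "none"}
EMOTION = {"outrage", "fear", "approval", "indifference", "resignation"}


@dataclass
class CodedComment:
    """One comment's coding result, mirroring the table above (hypothetical schema)."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Reject labels that fall outside the known value sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")
```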
Raw LLM Response
[
{"id":"ytc_Ugx8moUl6EJZkFS7P714AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0AP_Pp3vlulMJqtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-yOEoBIQp4Qzjy1N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgymMIWpL9GSaG1-2wl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugzxqt7bVMnYuRUZLd14AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwUVfwXku90xKirGgJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz2DvIMOu6H14tQOeF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzreR0wjmEax0vs9254AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyN6Oth4LTS1Ixa_Bl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxAiAEwArnDsRwwQCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
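
The raw response is a JSON array with one object per comment in the batch, keyed by comment ID. A minimal lookup sketch, assuming the response text is stored as plain JSON like the block above (the `RAW_RESPONSE` string and `lookup_coding` helper are illustrative, not part of the actual pipeline):

```python
import json

# Abbreviated copy of the raw batch response above (illustrative only).
RAW_RESPONSE = """[
{"id":"ytc_UgwUVfwXku90xKirGgJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8moUl6EJZkFS7P714AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}
]"""


def lookup_coding(raw: str, comment_id: str) -> dict | None:
    """Return the coding object for one comment ID from a raw batch response."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None


coding = lookup_coding(RAW_RESPONSE, "ytc_UgwUVfwXku90xKirGgJ4AaABAg")
print(coding)  # {'id': 'ytc_UgwUVfwXku90xKirGgJ4AaABAg', 'responsibility': 'company', ...}
```

The comment inspected above corresponds to the `ytc_UgwUVfwXku90xKirGgJ4AaABAg` entry in this batch, whose fields match the Coding Result table.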