Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
>Yea but if a driverless car loses its point of reference like a clear line in the road then it's pretty much the equivalent of a human driving blindfolded. It's way worse. Humans can adapt and think on their feet. I don't think cars can do that yet
I would say quite the opposite, actually.
Even the earliest examples of "self-driving" vehicles were, and are, programmed to operate within a very specific (dynamically updated) "set of laws" that ensures the safety of the occupants above all other considerations.
This "dynamic envelope of operation" is similar to what exists in some aircraft autopilot systems, where the risks are much more substantial and the industry has invested billions over the past several decades to perfect the technology. That technology and knowledge have been fleshed out and adapted for use on the ground; it's new, but certainly not "uncharted territory" (pardon the pun).
In an event such as you described, an autonomous vehicle (unless it was designed negligently) would already "know" precisely what to do to continuously maximize the safety of its occupants and the surrounding environment/vehicles. It would likely be pre-programmed to slow down substantially, or stop, so as to reacquire positioning/environmental data, and it would do so as quickly and safely as possible, certainly in a more predictable and tested-safe manner than any human driver could manage.
Furthermore, with continuous vehicle-to-vehicle communications (standardization is already in the works), cars ahead could provide additional data to aid localization and guidance, and cars behind could be warned of potential trouble ahead, or that the car is stopped on or beside the road.
The vast majority of externalities and "emergency events" would presumably have already been tested for by manufacturers, with varied **tested-safe** reactions fully pre-programmed. All of this would happen within milliseconds, 100% of the time, night or day.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Harm Incident |
| Posted (Unix) | 1459452665 (2016-03-31 19:31:05 UTC) |
| Score | ♥ 10 |
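The posted time above is stored as a Unix epoch timestamp. A minimal sketch of converting it to a human-readable UTC datetime, assuming Python and only the standard library:

```python
from datetime import datetime, timezone

# Unix epoch timestamp attached to the comment record above.
posted = 1459452665.0

# Convert to a timezone-aware UTC datetime.
dt = datetime.fromtimestamp(posted, tz=timezone.utc)
print(dt.isoformat())  # 2016-03-31T19:31:05+00:00
```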
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_d1koc14","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_d1krvhu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_d1kvh8k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_d1kmnkn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_d1ktvef","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
```
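The raw response is a JSON array with one coding object per comment. A minimal sketch, assuming Python and the standard `json` module, of how such a response could be parsed and indexed by comment ID for lookup; the two entries reproduced here are taken from the response above:

```python
import json

# Two coding records copied from the raw LLM response above.
raw = """[
 {"id":"rdc_d1koc14","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_d1kmnkn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

# Index the codings by comment ID so any single comment's coding can be retrieved.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["rdc_d1kmnkn"]["responsibility"])  # ai_itself
print(codings["rdc_d1koc14"]["emotion"])         # approval
```

In practice the model may return malformed JSON (note the stray `)` terminator in the raw dump above), so a production pipeline would wrap `json.loads` in error handling rather than trust the output shape.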