Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
hate ai mam XD. also, defendingai has already ruined their own 'it doesn't work.…
ytc_UgzYkbeLg…
Nah, they are quite in a good position in the open llm space. Llama 3 is coming.…
rdc_kokphh4
I understand the hate artists have with AI, but i think is not that bad for many…
ytc_UgxB00dJQ…
I already make one of these drone with open source LLM. auto tracking, chasing h…
ytc_Ugzmq9yNC…
We appreciate your concern about AI advancements. Rest assured, here at AITube, …
ytr_UgyalRMsP…
What does AI garbage have to look human I don't get it demented . To me it's ju…
ytc_Ugxo9VjwX…
It will go faster than that. I say that at somewhere around 2040 we are fighting…
ytc_UgxlRsHNN…
AI is a tool, it has settings + parameters, not conscience or moral judgement; t…
ytc_UgyaMOV9P…
Comment
Great video, and a lot to think about when it comes to self-driving cars. But the problem here is that the scenario in the video will never happen. I'm not saying that something won't randomly come into the road while a car is driving, but the car will never be in a situation where it has to decide whether to crash into something or someone. It has brakes. I know the video said that in this situation it won't be able to brake in time, but again, that will never be the case. The car has sensors that are always on and always analyzing the road and the cars around it. It will not get close enough to a truck with an unstable load for braking in time not to be an option. And the second that load comes undone and becomes a potential hazard, it will have already started slowing down.
It's an interesting concept to think about, but I feel like anytime someone comes up with a hypothetical, they forget that these cars don't have the ability to be "surprised" like a human driver would. They don't look around and think "everything seems alright so far, maybe I can relax now".
youtube
AI Harm Incident
2015-12-08T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UghSiRcVXA-3FHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg36gd_wQOCXHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UggzSEiGsQNLKngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghidMHZsCybB3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjzNTXzuzIxOngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UghfmsovrnUJPXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjQy7gtc5pA_XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugio_pXgICTxCXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggKQCpjXBYZKXgCoAEC","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgggitcG_CbrUXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
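The raw response is a JSON array with one object per comment ID, carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch response could be parsed, validated, and indexed for lookup by comment ID. The `OBSERVED` value sets below contain only the values visible in this sample, not the full code book, and `index_codings` is a hypothetical helper, not part of the actual pipeline:

```python
import json

# Dimension values observed in the sample response above; the real
# code book (not shown here) may define additional categories.
OBSERVED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "fear", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID."""
    rows = json.loads(raw)
    by_id = {}
    for row in rows:
        # Reject any value outside the known categories so malformed
        # model output is caught before it enters the dataset.
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"unexpected {dim} value in {row.get('id')}")
        by_id[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return by_id

raw = ('[{"id":"ytc_UghSiRcVXA-3FHgCoAEC","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
codings = index_codings(raw)
print(codings["ytc_UghSiRcVXA-3FHgCoAEC"]["emotion"])  # indifference
```

Validating against the code book at parse time means a hallucinated or misspelled category fails loudly instead of silently skewing the coded dimensions.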