Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
First of all AI should replace our useless overpaid politicians. I bet it would …
ytc_UgxZUYpoC…
genuinely.. im not bullshitting here.
I work in healthcare reserach. I have trie…
ytc_UgxpxDhNB…
As for AI turn it off now .your first instinct is the right one .turn it off now…
ytc_Ugy8eF1D6…
i think the PC comparison is useful but there is a key difference that gets over…
rdc_oi2foic
It’s terrifying to learn that it’s actually scarier when AI makes mistakes, beca…
ytr_UgwibThJF…
The LLM of today is already untrustworthy - often telling a string of porkies. I…
ytc_UgyLwxHA4…
Are you Ai? You do realize it's a waymo self driving vehicle service. There is n…
ytr_UgxxMInJg…
There should be an accessible AI app that can detect if the image, voice, video …
ytc_UgxtIM_BO…
Comment
The video is based on a very misguided premise: an action of a human in an emergency situation like this is instinctive/impulsive, while that of a robot car is premeditated/programmed.
If programmed strategies are to be called pre-determined/planned/premeditated, then the impulsive neural pathways in a human body (which have developed over years through experience/learning) that are responsible for taking the action should also be called pre-determined/planned/premeditated. In emergency situations like this, a computer cannot crunch moral algorithms to decide a strategy either. It needs to respond using a very low-level algorithm based on the obstacle states around itself, very much like the short-circuited neural pathways that tell a human what to do in that situation.
This is an interesting philosophical debate for those who don't understand how AI works, but it does not and will not have any practical relevance.
youtube · AI Harm Incident · 2015-12-08T20:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgiyupRVtlWBhngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggyEI8_YHbKA3gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugi3Gjq5meodMngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghoyPd4-QvbcXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugis53FXvmFe9XgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjQeVmvXPf4K3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugi_78FWydk3dngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjMGlt6fG9gKXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjGMferoLg5VXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiJzHt8WvuEHXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
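
The raw response is a plain JSON array with one record per comment, so both the by-ID lookup and a per-dimension tally take only a few lines of standard-library code. A minimal sketch, assuming the response parses as shown above (the two records here are copied from the sample; the variable names are illustrative, not part of the tool):

```python
import json
from collections import Counter

# Two records copied from the sample response above, for illustration.
raw = '''[
  {"id": "ytc_UgiyupRVtlWBhngCoAEC", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugis53FXvmFe9XgCoAEC", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]'''

records = json.loads(raw)

# Look up a coding result by comment ID, as the page's search box does.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_Ugis53FXvmFe9XgCoAEC"]["reasoning"])  # mixed

# Tally the distribution of values along each coding dimension.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(rec[dim] for rec in records) for dim in dimensions}
print(dict(tallies["responsibility"]))
```

Records where the model could not settle on a single value carry `"unclear"` or `"mixed"`, as in the "Coding Result" table above, so downstream aggregation should treat those as their own categories rather than dropping them.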