Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Its so sad that all those incels will die of death by pissed off Robot…
ytc_UgwPu8IoS…
This will probably get deleted but the fear mongering by the ceos is a tactic to…
ytc_UgzkNpC5X…
Indict the developers... AI didn't fall out of a tree. Of course all of this is …
ytr_UgzdT-ADj…
Perplexing how all ppl who contribute to all this AI and hivemind tech say it ne…
ytc_UgwVRflTb…
To call disallowing the use of AI "classist" is insane, especially considering t…
ytc_UgxIDuqeq…
I'm waiting for the product-liability suit against Elon, after he said his AI ha…
ytc_Ugxr7Byh7…
An 'AI combined with high-performing humanoid robots is the dream of a class…
ytc_UgxoB9CR1…
stevecn70 ...your computer isn't a AI..its a Tool.....search engines in your bro…
ytr_UggNnprVp…
Comment
Why can't we apply Isaac Asimov's Three Laws of Robotics to this dilemma to create a solution in which the A.I. creates minimal harm? For instance, if the car is fully autonomous and such dilemma arises, can't the car sends out distress signal to surrounding cars to stop the traffic flow OR have the car maneuver in such a way that the car crashes to the place where there are maximal chance of survival but least harm? In any case, NO HUMAN should bear the responsibility of programming the solution but the AI should determine that by itself.
Platform: youtube
Incident: AI Harm Incident
Posted: 2015-12-08T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
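The coding result above follows a fixed four-dimension schema (responsibility, reasoning, policy, emotion). A minimal validation sketch is below; note that the full label sets are assumptions inferred from the values visible on this page, and the actual codebook may define additional labels:

```python
# Hypothetical label sets per coding dimension. Only the values that
# appear on this page are confirmed; the rest of each set is assumed.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "none"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "resignation", "mixed"},
}

def validate_coding(rec):
    """Return the (dimension, value) pairs that fall outside ALLOWED."""
    return [(dim, rec.get(dim)) for dim in ALLOWED
            if rec.get(dim) not in ALLOWED[dim]]

# The record shown in the Coding Result table above:
rec = {"responsibility": "ai_itself", "reasoning": "deontological",
       "policy": "none", "emotion": "mixed"}
assert validate_coding(rec) == []  # all four values are in-schema
```

A check like this is useful before accepting a batch response, since an LLM can emit labels outside the codebook.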
Raw LLM Response
```json
[
{"id":"ytc_UghzuMMpBbsZkngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ughhc-RnxMS1LXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjY6HJikXmw-HgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghIQezVaUOb-3gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgicExh_IjSyAXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjRcySEHSlNsngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugha7FPAvBu3AngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UggqSiIbUqJPI3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UghJ2QECp_kzO3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghQd27Kawk0s3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
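The raw response is a JSON array with one coding object per comment in the batch. Looking up a single comment by its ID, as this page does, amounts to parsing the array and indexing it; the function name below is a hypothetical sketch, not the tool's actual API:

```python
import json

# Two records excerpted from the raw LLM response above.
raw_response = '''
[
 {"id":"ytc_Ugha7FPAvBu3AngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UghzuMMpBbsZkngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
'''

def index_by_id(response_text):
    """Parse a batch response and index the codings by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
coding = codings["ytc_Ugha7FPAvBu3AngCoAEC"]
# This matches the Coding Result table shown above:
assert coding["responsibility"] == "ai_itself"
assert coding["emotion"] == "mixed"
```

Because the model returns codings keyed only by ID, the dictionary index is what ties each JSON object back to the displayed comment text.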