Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "at least some good news but i'm sure next big thing in Ai won't be LLMs any more…" · ytc_UgyMHRovw…
- "Artist literally do the same thing as AI. They take inspiration from others and …" · ytc_Ugx9LUImg…
- "Ai would be good if it was truly only meant to help and was actually good at it…" · ytc_Ugy4J35Wk…
- "That technical debt figure makes me wonder who pays it back. We use AICarma dail…" · ytc_Ugxy-SOCw…
- "The issue is if AI can create robots to maintain the power creation and infrastr…" · ytc_Ugwc0fo2V…
- "You sure the lawyer didn't convince his parents to file a lawsuit on Open AI?…" · ytc_UgzMby6BS…
- "If driverless needs to remove chairs and bed/ sleeper . It will be easy to maneu…" · ytc_UgznjyKak…
- "@loickn3640 see Elons robot what it can do and in 5 years what it will do.…" · ytr_Ugxk3Mq_L…
Comment
"Don't allow cars to move so close to trucks"
"Self driving cars shouldn't get boxed in"
"Have self-driving cars communicate.."
..
The first line of the video? "THIS IS A THOUGHT EXPERIMENT."
The point is the question of ethical decisions made by robotic systems: when programmers are responsible for those decisions, and indeed what those ethics should be, given that they are being applied by a dumb machine.
Take Asimov's laws of robotics:
1. Do not harm humans, or fail to act and thus allow humans to come to harm
2. Obey instructions from humans unless that conflicts with law 1
3. Protect yourself unless in conflict with laws 1 or 2
These instructions would horribly confuse the self-driving car in the thought experiment, yet we desperately need to create (and legislate for) laws like these before we develop responsible robots, and most definitely before we create AI.
youtube · AI Harm Incident · 2016-02-11T18:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugg2BtWozk8CNngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgjFkjDPjqE2CngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjEW6MP3uLTC3gCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgjbNTENqsljHngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugj-Tm4fiodnsXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggOUDgUdR33tXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgjGwm-c396lkXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Uggj8ubOGU2UeXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugg-1WzQ124krXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugif6gsoLWXGuXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
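The raw response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response might be parsed and validated before it reaches the coding table; the allowed label sets below are inferred only from the rows shown here, and the actual codebook may define additional values:

```python
import json

# Allowed labels per coding dimension, inferred from the sample output above;
# the real codebook may permit more labels than these.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        comment_id = row.get("id", "")
        # Comment IDs in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not (comment_id.startswith("ytc_") or comment_id.startswith("ytr_")):
            continue
        # Keep the row only if every dimension carries a recognised label.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgjEW6MP3uLTC3gCoAEC","responsibility":"developer",'
       '"reasoning":"mixed","policy":"unclear","emotion":"unclear"},'
       '{"id":"bad","responsibility":"developer","reasoning":"mixed",'
       '"policy":"unclear","emotion":"unclear"}]')
print(len(validate_codes(raw)))  # → 1 (the row with a malformed ID is dropped)
```

Validating each batch this way lets malformed or hallucinated rows be dropped before they are stored, rather than silently polluting the coded dataset.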