Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Id love to see AI pick a lock or fix a leaking tap. AI will take over the jobs t…" (ytc_Ugy37_W4n…)
- "People are delusional or haven't been paying attention if they believe AI will b…" (ytc_UgwV1M4j8…)
- "China 5years ago have implemented AI, even now students in my country still usin…" (ytc_Ugy0iqn-8…)
- "Imagine if skynet Is already around spamming comments and warning us in every ro…" (ytr_Ugz7dD0ar…)
- "Are you asking if Google still uses searches with a "-AI" parameter to train the…" (rdc_n3x6nra)
- "This is what I’ve been trying to explain to people there is a huge difference be…" (ytc_UgwasjHap…)
- "I wonder if the elites think that they can trick AI into killing all but the eli…" (ytc_UgycppmLF…)
- "We appreciate your perspective on artificial intelligence. In our live broadcast…" (ytr_UgzSJxqPh…)
Comment
nope. this video is no different from Vsauce's video on Supertasks with the same published date. These videos are not really prepared well for the intention. Automatic driving cars don't have to make that elaborate decision, as they will have little information and time available; and the outcome will be just a result of their "reaction", similar to any animal.
If we were to have a robot or Artificial Intelligence, it always has a finite amount of information and time to choose, and it will make less optimal decisions than is possible. The topic only becomes relevant if someone or something can sue another for cases such as above.
youtube · AI Harm Incident · 2015-12-11T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugjwkh7gbtadm3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UghlKJ8Nc_INgHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UghjkWiCvWeo1ngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghilDXtRwfSCngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgjN2KgJTlwlC3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh54ZJdEXZfwngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjWA5kpI1F_UHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgjB5N2AWV6PlHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugh2zj0x13RnS3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgiaFVeDpzC9U3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
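The raw response is a JSON array with one object per coded comment, keyed by the comment ID. Below is a minimal sketch of how such a response could be parsed and queried by ID; the file path and helper name are illustrative assumptions, not part of the project's actual tooling.

```python
import json

# Minimal sketch (assumed helper, not the project's actual code): parse a raw
# LLM coding response like the array above and look up one comment's codes.
RAW_RESPONSE_PATH = "raw_llm_response.json"  # hypothetical file holding the JSON array


def load_codings(path: str) -> dict:
    """Return a dict mapping comment ID -> coded dimensions for that comment."""
    with open(path, encoding="utf-8") as f:
        rows = json.load(f)  # list of objects: {"id": ..., "responsibility": ..., ...}
    return {row["id"]: row for row in rows}


if __name__ == "__main__":
    codings = load_codings(RAW_RESPONSE_PATH)
    # Illustrative lookup using one full ID from the response shown above.
    comment_id = "ytc_UghjkWiCvWeo1ngCoAEC"
    coding = codings.get(comment_id)
    if coding is None:
        print(f"No coding found for {comment_id}")
    else:
        for dim in ("responsibility", "reasoning", "policy", "emotion"):
            print(f"{dim}: {coding[dim]}")
```

Indexing the response by ID rather than scanning the list mirrors the "Look up by comment ID" workflow described at the top of this page.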