Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
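The ID lookup can be sketched as a small index built over one parsed batch response. This is a sketch only: the field names and record shape follow the raw response format shown further down this page, and `index_by_id` is a hypothetical helper, not part of the tool itself.

```python
import json

# Hypothetical helper: build an id -> record index over one raw LLM
# response so a coding can be looked up by its comment ID.
def index_by_id(raw_response: str) -> dict[str, dict]:
    return {rec["id"]: rec for rec in json.loads(raw_response)}

# Example record modeled on the raw response format shown below
# (the ID here is made up for illustration).
raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}]'
codings = index_by_id(raw)
print(codings["ytc_example"]["emotion"])  # prints "mixed"
```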
Random samples (click to inspect; previews are truncated):

- `rdc_mdjpfzr`: "This was literally the best thing to put into my chatgpt, it used the word "damn…"
- `ytc_Ugwjaw0dL…`: "I am African and a student of Computer Science. I appreciate the points brought …"
- `ytc_UgxrvTheb…`: "LLM ala ChatGPT has the major issue that if you ask about something that is too …"
- `ytc_UgwKWcKhn…`: "Yes, i make robots and am deep into mechanical engineering and robotics and poss…"
- `ytc_UgzepJZ70…`: "Solution: the super intelligent AI figures out how to modify genetic code and ph…"
- `ytc_Ugz8wJCpF…`: "All of these isn't an issue in the future when everyone has a self-driving car a…"
- `ytc_Ugzq7toNy…`: "Artificial intelligence CAN'T tell EMERGENCY LIGHTS and legally pull over? If AN…"
- `rdc_lub65p5`: "Then the AI lawyers gets made and scan the entirety of law all results to justif…"
Comment (quoted verbatim)

> there are several ways to avoid accidents like this
> 1.-make trailers run on diferent roads where cars can not enter
> 2.-if the first one is not posibble (which I find it pretty easy to accomplish) make every car be no closer than 3-4 seconds apart, so if something like this happens there is chance to move to one lane, to the other or simply to slow down, and since the rule of the 3-4 seconds is still working the car on the back will stop as well as the one in the back of that one and so forth and so on
> of course there will still be accidents, if something can go wrong IT WILL GO WRONG, its a law called "The fuck you law" or more commonly know as "Murphy's Law", many of these accidents can be avoided if we make research all of the car accidents that have taken place since 1950 to today and come up with rules that will save millions of lives, and with with upcoming accidents the self driving car program can be upgraded.
> Now, before anyone gets angry at me, I know this is just a thought experiment, as stated in the video, "reality may not play out like our thought experiments"

Source: youtube
Incident: AI Harm Incident
Posted: 2015-12-12T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgjY5ZbRHpZbl3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugg6vkyHWXADQngCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgiE3qm0bdtqengCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggVGd5tRkaKZHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggsLujeKwbCNngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ughj07npbLjXPngCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UggP4ePx319A6ngCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugj-9FzhtV_B43gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghfJlHACEBRgHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggx47tC_oo6mXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
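The batch response above is a flat JSON array with one object per coded comment. A minimal sketch of validating such a response before ingesting it follows; note that the allowed value sets are an assumption inferred from this one sample, not the project's actual codebook, which may define more labels.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred solely from
# the sample response on this page; the real codebook may differ.
SCHEMA = {
    "responsibility": {"none", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"resignation", "indifference", "approval", "outrage", "mixed", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse one raw LLM response, keeping only schema-conformant records."""
    return [
        rec
        for rec in json.loads(raw)
        if "id" in rec
        and all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]
```

For example, a record whose `responsibility` value falls outside the allowed set is silently dropped, which keeps a single malformed line from poisoning the whole batch.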