Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID (see the lookup sketch after the samples below)
Random samples (click to inspect):

- `ytc_UgyAeGuRo…`: "I think that Bernie has not been keeping up with the news. Musk doesn’t have any…"
- `ytc_UgzjaSJqV…`: "11:09 wanted to save the human. At this point this is just ragebait not asking t…"
- `ytc_Ugz5ksRWk…`: "eventually they will be like us no ai will be exactly the same and they would ha…"
- `ytc_Ugwcs_gdL…`: "I am learning AI not because I am pro-AI but because I don’t trust government to…"
- `ytc_UgypAVZCS…`: "Thank you so much @TheDiaryOfACeo for sharing such content with us and thank you…"
- `ytr_UgwQPZgkf…`: "@Speaker-Beater Thank you for the input, counterargument if I may. I have yet to…"
- `ytc_UgzYCJpRz…`: "Giving robot sentience is a bad idea. Have you never watched any robot movies. A…"
- `ytc_UgzSIfweu…`: "Yall need to shut the hell up who cares times change we evolve so quit whinning …"
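
The lookup is keyed on the comment ID. A minimal sketch of how such an index might be built, assuming coded results are stored one JSON object per line with an `id` field; the file name and function names here are hypothetical:

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded comment records by their ID for O(1) lookup.

    Assumes one JSON object per line (JSONL), each carrying an "id"
    field, matching the record shape in the raw response below.
    """
    index = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if not line.strip():
                continue  # skip blank lines
            record = json.loads(line)
            index[record["id"]] = record
    return index

# Hypothetical usage: the IDs in the sample list are truncated, so a
# real lookup needs a full ID like the ones in the raw response below.
coded = load_coded_comments("coded_comments.jsonl")
result = coded.get("ytc_Ugyr0RMpIPjrTwnqobN4AaABAg")
if result is not None:
    print(result["responsibility"], result["emotion"])
```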
Comment
> In the video, the man said that if we were driving in that boxed in car in manual mode and which ever way we react, it will be understood as just a reaction and not a deliberate decision. Self driving cars are said to be predicted in reducing traffic accidents and fatalities by removing human error from the equation and there can also be other benefits like decreased harmful emissions and minimize unproductive and stressful driving times. This video is talking about the ethical dilemma of self driving cars. He also mentioned that could it be the case that a random decision is still better than a determined one designed to minimize harm. I think that it is telling us about making our own decisions, the correct reaction and response and helping us learn about technology ethics and that although reality sometimes may not play like our thought experiments but it is not the point because they're designed to isolate and stress test our intuition on ethics.
Platform: youtube
Topic: AI Harm Incident
Posted: 2020-11-10T10:2…
Likes: 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
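
For downstream use, the four coding dimensions and their values could be modeled as a small typed schema. The sketch below is assembled only from the values visible on this page; the actual codebook may define additional categories:

```python
from dataclasses import dataclass
from enum import Enum

# Value sets are only those observed in the sample output on this page;
# the full codebook may include more categories.
class Responsibility(str, Enum):
    COMPANY = "company"
    USER = "user"
    AI_ITSELF = "ai_itself"
    UNCLEAR = "unclear"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    CONTRACTUALIST = "contractualist"
    UNCLEAR = "unclear"

class Policy(str, Enum):
    REGULATE = "regulate"
    LIABILITY = "liability"
    NONE = "none"
    UNCLEAR = "unclear"

class Emotion(str, Enum):
    FEAR = "fear"
    APPROVAL = "approval"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"

@dataclass
class CodingResult:
    comment_id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"
```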
Raw LLM Response
[
{"id":"ytc_Ugyr0RMpIPjrTwnqobN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx-IgbdLnExAW_EhZF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxN7dBvLWbIzar2GFB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzEykcQ7MVTCsx7g7d4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwvKTI5iOHqUc77Qtp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz0YcSTWrba2PmZtwp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugylzv1Xaz0MgWpxUfJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw3EQZ9U6NBDWibkPp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxJ57uNPwCyWaPdS5p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxsG5XLEWsnnq4EsCR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
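
Since the model returns a plain JSON array of per-comment codes, a pipeline would typically validate each batch before merging it into the dataset. A minimal sketch, assuming only the record shape shown above; the validation rules are illustrative, not the project's actual pipeline:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_llm_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response into coding records.

    Raises ValueError if the output is not a JSON array of objects
    carrying the expected keys, so a malformed batch fails loudly
    instead of silently polluting the coded dataset.
    """
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError(f"expected a JSON array, got {type(data).__name__}")
    for i, record in enumerate(data):
        if not isinstance(record, dict):
            raise ValueError(f"record {i} is not a JSON object")
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return data
```

Applied to the response above, this returns ten records; a truncated or malformed batch raises an error rather than being partially ingested.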