Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "And then you add ChatGPT with Boston Dynamics to do the trajectory calc…" (ytc_UgyAXMLH9…)
- "florianschneider3982 there is no evidence to support that belief. AI is already…" (ytr_UgzlxPLg1…)
- "They killing humans and Turning them into humoids ,robots ,clones to replace th…" (ytc_UgzoDa_6U…)
- "Clickbait, completely wrong. AI is here to stay and will replace more and more p…" (ytc_Ugzu4TsO7…)
- "My idea is that the best use of AI is not to create things faster and cheaper, b…" (ytc_UgyqEyCUW…)
- "ULTIMATELY, THE ROBOTS OF THE FUTURE WILL JUST WANT TO MAKE TOAST / A TALL TALE / …" (ytc_Ugj_DgEUM…)
- "The thing is, China exports 2.5 T worth of goods, the OECD exports 11 T. Further…" (rdc_gx6jz2j)
- "never underestimate the investors as fools. its undeniable today that, AI can d…" (ytc_UgwZ4x1eI…)
Comment
This is very interesting. Does it mean that until the policies are set by the governments, one driving such a car can fall into a case against him if the car kills someone to avoid a collision? If the policy by the government says that it is acceptable to program such a car to target left or right no matter the consequence, then is it morally decent to even buy a self-driving car? Can one say "oh this death of the motorist is not my fault as my car has been programmed that way and the governing body of the country I live in has said it is ok"... As a human being, can you accept that situation?
Platform: youtube | Incident: AI Harm Incident | Posted: 2023-02-15T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwqc1_q2DdUOgJryI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1yrdGDed9s8wbpop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEJwX76AinvoS1s5d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwRqHjlDaElPKWED_14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxXr4QY4lnbmQR0nAF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxO_Ujk5rSvOjWRKBB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxdMf8xwWtitYQSG9Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxA0UzFRipN-4avxKF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugxr6E9-mHJqZTtbMkB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgzEQs0TPw7Wr4XUt_V4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"fear"}
]
```
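The lookup-by-comment-ID flow above can be sketched in a few lines, assuming the raw model output parses as a JSON array of coding rows like the one shown. The two rows below are copied from that response; the helper name `index_by_id` is illustrative.

```python
import json

# Two coding rows taken verbatim from the raw LLM response on this page.
raw_response = """
[
  {"id": "ytc_Ugwqc1_q2DdUOgJryI54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzEQs0TPw7Wr4XUt_V4AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "liability", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and map each comment ID to its coding row."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_id(raw_response)
coding = codings["ytc_UgzEQs0TPw7Wr4XUt_V4AaABAg"]
print(coding["responsibility"], coding["policy"])  # government liability
```

The dictionary built here is what backs the "Look up by comment ID" view: the coding-result table for the inspected comment is just the row whose `id` matches.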