Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or browse the random samples below.
Random samples (click any to inspect):
- "Yes, the alignment is a very real problem when dealing with AI. I have a funny e…" (`ytc_UgzvVkHeF…`)
- "This is the first video that’s actually made me feel a little hopeful about AI a…" (`ytc_UgwUWcOMF…`)
- "Excellent interview, but it has left me with so many questions and observations …" (`ytc_UgzZ35tRX…`)
- "I’m someone who, even though I love drawing, is not very good at it. When I firs…" (`ytc_UgzyiUp2E…`)
- "They later had to shut down Sophia because she and another robot created a langu…" (`ytc_UgynC_tbC…`)
- "I only use AI for reference. I prefer IRL reference but when I can't find one I …" (`ytc_Ugy9b24hh…`)
- "Even as someone with zero artistic ability I completely agree with Miyazaki, usi…" (`ytc_UgwPQGR8U…`)
- "There are multiple theories of what consciousness is. ChatGPT is just answering …" (`ytc_UgxPN3PLr…`)
Comment
I do not see any dilemma at all, and I don't have any difficulty answering the question. It is pretty obvious in my view.
Say you are a car manufacturer who wants to sell cars. You have two options. Either:
A. You can tell your customers that their new car will pick a scenario based on preset parameters, and you can discuss what those scenarios would be.
or
B. You can tell your customers that their new car will always, without exception, prioritize saving the person in the car.
Which car manufacturer do you think will sell more cars? I also know which car I would buy, given the choice.
There is no discussion here. Clearly the AI will have to put more value on its owner's life than on someone else's. Otherwise there is no trust.
youtube · AI Harm Incident · 2020-10-21T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugyr0RMpIPjrTwnqobN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx-IgbdLnExAW_EhZF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxN7dBvLWbIzar2GFB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzEykcQ7MVTCsx7g7d4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwvKTI5iOHqUc77Qtp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz0YcSTWrba2PmZtwp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugylzv1Xaz0MgWpxUfJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw3EQZ9U6NBDWibkPp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxJ57uNPwCyWaPdS5p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxsG5XLEWsnnq4EsCR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]