Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews with their comment IDs):

- "I wonder why humanity hasn't caught up on the fact that maybe our problems would…" (ytc_UgxI9jZfi…)
- "Tell me you know NOTHING about AI whilst simultaneously being the guy which runs…" (ytc_UgwN9phxw…)
- "Well you… really medes AI to add one red line? I mean… I think you could have d…" (rdc_nb9m6i0)
- "I must say this feels more like a general rant on capitalism than self-driving c…" (ytc_UgyvEdN-y…)
- "I would not watch AI videos of you. Real you is why i watch it…" (ytc_UgwPMi9dy…)
- "AI inevitably will act against human control. AI infact is already doing this an…" (ytr_UgzT1rMDz…)
- "@justacutepanda888 ai is not "taking inspiration" dummy. It has data sets, the…" (ytr_UgyH3KZwA…)
- "I find it interesting that Mr. Hinton is suggesting Elon Musk has no moral compa…" (ytc_UgyrvIJbF…)
Comment
I’m not sure why we think that a super intelligent AI would lack empathy and compassion along with all the other types of intelligence that prevent us from killing each other. LLMs are trained on human data and human behaviour. Very intelligent people aren’t secretly wanting to take over the world and kill everyone. The super intelligent supervillain only really happens in films. Greedy world leaders are not the most intelligent. Why would AI want to kill humans? I think this a way of amplifying self-deprecating thoughts in which we believe that humans are terrible, unworthy of our own existence and that any super intelligence will undoubtedly understand it and terminate us. I don’t think so
youtube · AI Moral Status · 2025-04-28T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
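
Each coding result is a single record with four categorical dimensions plus a timestamp. A minimal sketch of that record as a Python dataclass, assuming the field names shown in the table above; the example values in the comments are only those observed in this sample, and the real codebook may define more:

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment: four categorical dimensions plus a timestamp."""
    comment_id: str      # e.g. "ytc_UgzRUsoYOkImYDiW0SZ4AaABAg"
    responsibility: str  # observed: none, company, ai_itself, user, distributed
    reasoning: str       # observed: virtue, consequentialist, deontological, mixed, unclear
    policy: str          # observed: none, regulate, liability
    emotion: str         # observed: approval, fear, outrage, indifference, resignation
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"
```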
Raw LLM Response
[
{"id":"ytc_Ugzfo3t_x5p2hPuetO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw24u-Pk_DJohHEiNJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzRUsoYOkImYDiW0SZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJ9__zD5djP_96Aj14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyAjUrLQpEhxQjNf9d4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZEGeWDFdUDc9QUwN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjGNYjwAvjCW0_cJB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSLDWkernqrasS9ZN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwhiFVzfYHj8ac6pkZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxjiizwYWXxZdegUBh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
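
The raw response is a plain JSON array with one object per comment, so the lookup-by-comment-ID view can be reproduced offline. A minimal sketch, assuming the array is saved to a file named `raw_responses.json` (the filename and loader are illustrative, not the tool's actual storage):

```python
import json


def load_codings(path: str) -> dict[str, dict]:
    """Index a raw LLM response (a JSON array of coded comments) by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}


# Usage: look up the coding for one comment ID from the array above.
codings = load_codings("raw_responses.json")
rec = codings["ytc_UgzRUsoYOkImYDiW0SZ4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# -> none virtue none approval
```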