Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or browse the random samples below.
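The same lookup is easy to script. Here is a minimal sketch in Python, assuming the raw model outputs are stored as JSON arrays like the one reproduced under "Raw LLM Response" at the end of this page; the file name `raw_responses.json` and the helper `index_raw_response` are hypothetical:

```python
import json


def index_raw_response(path: str) -> dict[str, dict]:
    """Index one raw LLM batch response by comment ID.

    Assumes the file holds a JSON array of coding records,
    one object per comment, each with an "id" field.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}


# Example: fetch the coding for one comment (full ID taken from the
# batch shown below; the sample list on this page truncates its IDs).
codings = index_raw_response("raw_responses.json")
rec = codings.get("ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg")
if rec is not None:
    print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```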
Random samples:

- “I did no intentional tricks and one day chatgpt suggested me to switch to a blac…” (ytc_UgytwKsUR…)
- “the ppl thats gon make the ai are gonna be those 4 guys who dropped out of that …” (ytc_UgwgGFCmE…)
- “Soooo... The electric cars and the computer/AI stuff that they told us would red…” (ytc_UgwqjX8Ml…)
- “Misguided in so many ways. Guessing she is one of the "geniuses" that rolled out…” (ytc_UgyvfHuD6…)
- “This happened in Canada and here there needs to be more than just a disclaimer w…” (rdc_ks8u3sm)
- “I went to public school and I did those things too. I was more into art and home…” (ytr_Ugw634zwm…)
- “You managed to put into words why I don't like using "AI art looks bad" as an ar…” (ytc_UgxGc4rG0…)
- “@jacksonsilva2881 Thank you for your comment! Well, I guess it's easier to fight…” (ytr_Ugxi0T5t1…)
Comment
> I think it’s good practice to program AI such that they can’t actively take part in moral decisions, but only give recommendations. I get that it will br somewhat aligned to a certain moral system. I’m just saying, let’s not give AI the resources and means to act out on what should only be recommendations. This is because we can’t be sure it is alligned with what we would describe as human morals. The trolley problem is one thing, but what if we have 5 sick people who could live if the got the organs of one healthy person. Most people would agree not to overthread the healthy persons right to live even if that means we deny 5 people their right to live. This is because the right to live and right to treatment is different, and introduces even more dilemmas. An AI could choose to take the route that maximises pleasure for most, and kill the healthy person if it values this solution more.
Platform: youtube · Timestamp: 2025-10-14T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLPJ0vfZ9SzhLKLr14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzLBeq8d6lIIm5Drgh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8m4Fl-BbNergL9J54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKjDp0n6ot9wgllox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx0gzwIPuSqnbUrgkx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwodsn1Jw97eGI_RQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwtq5RhAZ0N3_Xvhht4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx9Sxklry1S9csY5cN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDKZ8UbeHYgEO8TVV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
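Because the model is expected to return a flat JSON array with a fixed set of category values per dimension, a batch response like the one above can be sanity-checked before its codings are accepted. A minimal sketch, with the allowed value sets inferred only from the output shown on this page (the project's actual code book may define more categories):

```python
import json

# Allowed values per dimension, inferred from the output on this page;
# the real code book may include categories not observed here.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval"},
}


def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM batch response."""
    problems: list[str] = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]
    for i, rec in enumerate(records):
        comment_id = rec.get("id")
        if comment_id is None:
            problems.append(f"record {i}: missing id")
            continue
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append(f"{comment_id}: unexpected {dim}={rec.get(dim)!r}")
    return problems
```

An empty return value means the batch parses cleanly and every record uses only known category labels.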