Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_Ugzpu3JFD…: the definition of art is "the expression or application of **HUMAN** creative sk…
- ytr_Ugx0X51Ll…: It isn't that llms have reached the level of human intelligence. It's that human…
- ytc_UgySJ7nwR…: AI already does this. Decision Information Systems have been around since before…
- ytc_UgyvT7XJv…: An AI’s performance is hugely dependent on the data that the AI is trained on. I…
- ytc_UgyQxuJD8…: Dont worry Ai Animations look still disgusting no company would use ai. We mak…
- ytc_UgwGzGENA…: 3:58 I hate being mean but... a guy asked me to draw some things for his game. T…
- rdc_mw7ndoa: Ok I didn't go that far with asking it to ask me a question but it did ask me if…
- ytc_UgwMXFbtG…: We are not scared of a “cattle rebellion” because we know cows are physically re…
Comment
I remember hearing a quote that went something like "Let's say you give a robot an instruction that it must never harm a human. But first you need to define 'harm', and also 'human'. We already struggle to come up with definitions that everyone would agree on, so whatever definition we teach the robot would leave some people dissatisfied." This was many years ago, before this more recent "AI boom", but it made me realise that we can't possibly expect AI to give us a satisfactory answer or performance because there is enough disagreement on the specifics of even common terms, that the whole endeavour is likely to have unintended consequences (probably not world-ending, just that AI will never live up to the hype).
Source: youtube
Video: AI Moral Status
Posted: 2025-10-31T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzqSrhcMc5eA-mHUWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRBROBATfSuxlz24B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkI7xjS9FJrr_TCDt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzGXyd0KA7vzoJuyxd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxHtVxe1xBrLXLzdLB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyJ_eeBMPqjd0yPWvR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxm0TtDjwAb9x039cJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJGNY8IfdyrSHiD6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxov7_kdNf5ZDujqil4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxK7_q4uQmAz4Ns--14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
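Since the raw response is plain JSON, looking up a comment's codes by ID is a one-liner once the array is indexed. A minimal sketch (variable names are illustrative; only the JSON shape is taken from the response above):

```python
import json

# Raw LLM response, abridged to one entry from the batch shown above.
raw_response = '''[
  {"id": "ytc_UgyRBROBATfSuxlz24B4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the coded rows by comment ID for direct lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

codes = codes_by_id["ytc_UgyRBROBATfSuxlz24B4AaABAg"]
print(codes["reasoning"], codes["emotion"])  # consequentialist fear
```

This is how the "Coding Result" table for a single comment can be reconstructed from the batch response: each dimension in the table is just a key on the matching JSON object.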