Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I remember hearing a quote that went something like "Let's say you give a robot an instruction that it must never harm a human. But first you need to define 'harm', and also 'human'. We already struggle to come up with definitions that everyone would agree on, so whatever definition we teach the robot would leave some people dissatisfied." This was many years ago, before this more recent "AI boom", but it made me realise that we can't possibly expect AI to give us a satisfactory answer or performance because there is enough disagreement on the specifics of even common terms, that the whole endeavour is likely to have unintended consequences (probably not world-ending, just that AI will never live up to the hype).
youtube AI Moral Status 2025-10-31T10:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzqSrhcMc5eA-mHUWd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyRBROBATfSuxlz24B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwkI7xjS9FJrr_TCDt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzGXyd0KA7vzoJuyxd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxHtVxe1xBrLXLzdLB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyJ_eeBMPqjd0yPWvR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxm0TtDjwAb9x039cJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwJGNY8IfdyrSHiD6N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxov7_kdNf5ZDujqil4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxK7_q4uQmAz4Ns--14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
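The coding result shown above is obtained by parsing this JSON array and looking up the entry whose `id` matches the comment. A minimal sketch of that step (the variable names are illustrative, not the pipeline's actual code; the sample is truncated to two entries copied from the response above):

```python
import json

# Raw model output: a JSON array of per-comment codings
# (truncated here to the first two entries from the response above).
raw = '''
[
  {"id": "ytc_UgzqSrhcMc5eA-mHUWd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyRBROBATfSuxlz24B4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment displayed above.
coded = codings["ytc_UgyRBROBATfSuxlz24B4AaABAg"]
print(coded["reasoning"], coded["emotion"])  # consequentialist fear
```

In practice a real pipeline would also validate that each row carries all four dimensions and that the values fall within the allowed codebook labels before accepting the batch.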