Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The biggest danger of AI is gullible users: those who can't test the veracity of the current crop of AI systems' output, or who don't bother to ask how a system can validate its response. These systems are still pretty lousy at reasoning, and they are incapable of saying "I don't know." So if you ask them, for example, what the meaning of life is, or some other question that has no answer simply because of the limitations of language, and they give you any answer other than "I don't know" (including questions), then you know you are not communicating with a viable intelligence. This is the easiest way to catch them out. The people who are impressed by ChatGPT are early indicators of the kind of trouble brewing in the near future. So the best defense against AI is education, i.e. the same old story: all those systems rely on keeping the population ignorant and incapable of independent thinking. These are the kind of people who were fooled by Brexit, and those who thought voting for Trump would be beneficial to their quality of life. All you need to do is consider the statistics on the uneducated and correlate them with those who voted for Brexit and/or Trump.
youtube AI Governance 2023-05-02T17:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgytUudi13LYjwolzEp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwiF7jHNBkQpsQi0Ax4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzX5oqFnUVf2WERirN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFmxXRbxS10hBsLSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxK58XdipsvmRXfbwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwoH_0CQ17KaW370Dp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwLy1tCoHv9nlHPYUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzuoQTy-R9DLgdNJRN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzC3Ad5CgZoua0i84B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxlHMZxqdeu2AX1RE54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
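The coding result shown above can be recovered from the raw LLM response by parsing it as a JSON array and indexing by comment id. The sketch below is illustrative (the variable names are assumptions, not part of the tool); the ids and values are taken verbatim from the response, abbreviated to two entries:

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = '''[
  {"id":"ytc_UgzX5oqFnUVf2WERirN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFmxXRbxS10hBsLSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]'''

codings = json.loads(raw)

# Index the codings by comment id for direct lookup.
by_id = {c["id"]: c for c in codings}

# Look up the coding for the comment displayed on this page.
record = by_id["ytc_UgzX5oqFnUVf2WERirN4AaABAg"]
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
# prints: user consequentialist none indifference
```

The four dimensions printed here match the "Coding Result" table above, which is how a single comment's coding is extracted from the batched response.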