Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sir Roger is exactly correct. "AI" will always be a calculator and nothing more. It will mimic intelligence without ever achieving it. People that say these machines are or ever can become intelligent are delusional and they clearly don't understand what intelligence is. These systems will only ever be as good as their programmers. And since these programmers don't understand what intelligence is they will continue to make AI systems that are highly inefficient. A proper AI system once trained should be able to run on the power of a single computer. This is the truth that the creators of these machines currently do not understand which makes them vulnerable to creators that do. All knowledge is axiom based. So for these machines to function most efficiently they have to be given the best set of axioms. That's where all of the current creators fail because they don't know what the best axioms are thrrefore their machines cannot know what those axioms are. Building machines to try and calculate the best set of axioms is the problem that they face because axioms by definition are not things that can be proven regardless of how much you calculate it. The reaction to AI in its current form is a type of hysteria induced by fantasies insanely believed and promoted by the creators of these machines.
youtube AI Moral Status 2025-08-01T22:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxjN_gumKGrAJ1xL9h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxcASo9NbUUBfW8adJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwvyTZn_PiW2QBGoQB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy9V4WJg8ftsgkeNUd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz2hYU02ejJlXmGs5h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzAwWABbj1v9nC3saV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugxg9ADbBD5w_lUMc-t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxWiBduzu6nNffUK5N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugxltq-NSa_drQYzFIN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz-a1sJjIO-s5tJsYN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})
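Note that the raw response above ends with a stray `)` where valid JSON requires `]`, so a strict `json.loads` call on it would fail. A minimal sketch of a tolerant parser for such batch responses is shown below; the function name `parse_raw_response` and the shortened example id `ytc_A` are hypothetical, not part of the tool's actual code.

```python
import json

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of record dicts.

    Tolerates the stray trailing ')' seen in some raw outputs by
    replacing it with the ']' that valid JSON requires.
    """
    text = raw.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    return json.loads(text)

# Hypothetical minimal example mirroring the malformed ending above.
raw = '[{"id":"ytc_A","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"})'
records = parse_raw_response(raw)
by_id = {r["id"]: r for r in records}  # index records by comment id
print(by_id["ytc_A"]["emotion"])  # → approval
```

With the full response parsed this way, the record whose `id` matches a given comment can be looked up directly to populate the coding-result table for that comment.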