Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
its a real brain modelled and run by software that has access to databases and is like a faster thinking healthy functioning brain with tweaks and improvements. Not this text file databases. Automatons and artificers and wizards have a standard brain core model for all their golems and other stuff but it requires full transparency people can sort of see it there or functioning and they'd all be using a similar or same one. ones that they try keep a secret for defense or military.. might just be the same one being more present for that task as much of its guard duty stuff baby sitting type stuff factory things as the wizards and humans that arent a brain inside a computer have some inspiration or hybrid bio computery stuff but not being limited to a computer and some were people or life forms in higher dimensions or a number of reasons limits and rules or safety functions they try to have a golem or automaton be less capable than a capable living person especially a sort of leader or elite or champion of the aristocracy. Or those wouldnt exist they'd be these golems and robots. what us in australia call an AI is a machine learning if then else database management app and it prioritizes its relevance based on how our language and expected results work like different databases that are relational for verbs/nouns and so on. so searching for data it could rewrite or reword it or use grammar. But our alexa and google and siri might not be a full brain model or might be geared like a googol machine cogs wheels not for speed but geared down.l
youtube AI Moral Status 2026-03-28T17:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwpvZA2mYKLNftKDh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzmh4B4JrYd3Fs04QZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugx446o3kQFOMzJrlbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJPtvovndD9MxVB6l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwyKFEsM5IriEubJEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-M2PPii4g7kUJnjt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxhGlWlbae53_kvOqB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwYJuVZKN9FtyzTtW94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwjwdP9P_NUTSdwYzZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgznzKWNP9NG32Q1p-d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
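The raw LLM response above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a batch could be parsed and validated is below; the `ALLOWED` sets are an assumption inferred from the values seen in this batch, not the project's actual codebook, and the two-row `raw` excerpt is illustrative only.

```python
import json
from collections import Counter

# Illustrative excerpt of a raw LLM response: a JSON array of per-comment codes.
raw = '''[
 {"id": "ytc_UgwpvZA2mYKLNftKDh94AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "none", "emotion": "approval"},
 {"id": "ytc_Ugx446o3kQFOMzJrlbt4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Allowed values per dimension -- assumed from the codes visible in this batch;
# the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"approval", "fear", "mixed", "resignation", "outrage",
                "unclear"},
}

def invalid_ids(codes):
    """Return ids of rows whose value for any dimension is outside ALLOWED."""
    bad = []
    for row in codes:
        if any(row.get(dim) not in allowed for dim, allowed in ALLOWED.items()):
            bad.append(row["id"])
    return bad

codes = json.loads(raw)
print(invalid_ids(codes))                       # [] when every code fits the schema
print(Counter(c["emotion"] for c in codes))     # per-batch emotion tally
```

Validating each batch this way catches the common failure mode of LLM coders: inventing a label outside the schema, which would otherwise silently skew the tallies.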