Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@jordansandor6449 A.I. is an inflationary, overused term, one that annoys many truly knowledgeable experts in the area. The buzzword says absolutely nothing about the real capacity and abilities of a so-called machine. A.I. could mean a simple pattern-recognition program (OCR), which goes back many decades, or it could mean a modern language model with billions of parameters. As sophisticated as those machines might seem, they all lack essential abilities of human beings, which the Turing test does not cover sufficiently. That test is highly controversial, and not only because of its age. One very important capability that A.I. scientists are still trying to achieve is causality, something even five-year-olds manage. To better distinguish projects pursuing that goal, the term AGI was introduced. Why is causality so important? It is an essential ingredient of intelligence: it helps us recognize a well-written, plausible text. A language model can't do that. It takes the data it gets; if it gets nonsense texts, it will produce nonsense content, because it lacks the concept of cause and effect and other intellectual capacities. It absorbs human errors, shortcomings, limits, and even prejudices from the texts without actually understanding the content. The goal of AGI is to make the machine actually understand the content and judge its quality. Read more about this topic from renowned scientists like Yann LeCun, Yoshua Bengio, and Demis Hassabis, and from several highly reputed institutes around the world. As for the "big problem of consciousness", which goes even beyond causality, we are still clueless after hundreds of years. The human brain is one of the most complex things we know of so far. Every company or person who claims to have achieved a sentient A.I. has either fallen for an illusion or is simply attempting a marketing stunt.
Over a decade ago, the controversial Human Brain Project was started, and it came to a bitterly sober result: it could not manage to simulate a human brain, even with all its funding. And even if you had the costly, enormous compute power, you still wouldn't have a clue about the "software" of the human brain. Experts say that a human brain works completely differently from a computer, which means we actually need a completely different approach.
youtube AI Moral Status 2022-06-27T00:0… ♥ 6
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_Ugz-jQoRhQSvTZtC7694AaABAg.9ciWeAVpfTp9ckrNvLUU_l", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugz-jQoRhQSvTZtC7694AaABAg.9ciWeAVpfTp9ckuO5xzMJu", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugz-jQoRhQSvTZtC7694AaABAg.9ciWeAVpfTp9ckvliMNfPl", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwsNuG1WDE1s9H3sEB4AaABAg.9ciDoG4AiKX9ciMt09DkBC", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgxC3h8s04y2U0VbRZl4AaABAg.9chkmpPykeX9chsEDEwsmI", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxC3h8s04y2U0VbRZl4AaABAg.9chkmpPykeX9ciOzL4Q7Y1", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxC3h8s04y2U0VbRZl4AaABAg.9chkmpPykeX9cj4i5omb-4", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugz5cA-RmJ9zhty602l4AaABAg.9chIvb57AZQ9chIySVKKbE", "responsibility": "none", "reasoning": "unclear", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytr_UgyrJ6fVjQFAns3RrPV4AaABAg.9chCIGEStdE9ciaT47PEl0", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyrJ6fVjQFAns3RrPV4AaABAg.9chCIGEStdE9ciqlQiL9eU", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
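A raw response like the one above is a JSON array of per-comment codes with the four dimensions from the table (responsibility, reasoning, policy, emotion). A minimal Python sketch for parsing such a response and looking up the codes for a given comment id; the id used here is a shortened, hypothetical stand-in for the long `ytr_…` strings in the real data:

```python
import json

# Hypothetical raw LLM response in the same shape as the one shown above
# (real ids are long "ytr_..." strings; this one is shortened for illustration).
raw = '''[
  {"id": "ytr_example1", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "company", "reasoning": "deontological",
   "policy": "unclear", "emotion": "outrage"}
]'''

# Parse the array and index the coded records by comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Retrieve the four coded dimensions for one comment.
coded = by_id["ytr_example1"]
print(coded["emotion"])  # -> indifference
```

The same lookup generalizes to the real response: after `json.loads`, each comment's codes are a plain dict keyed by the dimension names.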