Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In memory of Suchir Balaji

PLEASE SHARE: Some thoughts on Sam Altman's notorious lies and the development of AI models in general

The fact is that Sam Altman has regularly attracted attention over the past few years for his lies and attempts at deception. Anyone who values the truth can verify this without much effort. In fact, this was the real reason for his temporary dismissal as CEO of OpenAI.

I have regularly commented specifically on the topic of Artificial General Intelligence (AGI) over the past few months. Here is a brief summary of the facts: The only meaningful benchmark for Artificial General Intelligence (AGI) is human general intelligence, which is defined primarily by an outstanding ability to generalize. Applied to computer science, this corresponds to data efficiency. The data efficiency of Large Language Models (LLM's) is very low and fixed. This means that it can not be improved. In short: For technical reasons, transformer-based AI-models (Large Language Models like ChatGPT, Gemini, Claude, Lama, Grok etc.) will never even come close to developing anything resembling general intelligence. These are easily verifiable, irrefutable facts. Anyone who claims otherwise is either ignorant or a liar.

For the further development of the various common AI models, which are all based on the same technical principle, this means that they will soon reach insurmountable limits, regardless of the effort and scaling methods. The exponential development of the past will not continue for much longer in the future. Reaching AGI is definitely impossible.

With love from Germany
Johannes Miertschischk
@SeriousStuff42

IN MEMORY OF SUCHIR BALAJI
Source: youtube · Posted: 2025-03-20T17:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugx37lhYko7N9yGrP5h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxZFNguqXCxHX7Ldwx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwko5uJgwuenkCL3IR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwq7XZirowSAP-ShDp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw0msX4vJPc3No-HJV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyWFht4TWi2Qpj3Qx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugw36vTPUXrcl8ik4p94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwNOYxEjjKi67_1QWd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwGQvRz8MTFM1nC6w94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwJVxlE4Eqyjvk5UM94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"})
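Note that the raw response ends with a stray `)` where the closing `]` belongs, so it is not valid JSON as emitted; a parser that rejects it outright would plausibly leave every dimension "unclear", as in the coding result above. The following is a minimal sketch of a repair-and-validate step (`parse_codings` is a hypothetical helper, not part of the actual pipeline), assuming each row must carry a comment id plus the four coding dimensions shown in the table:

```python
import json

def parse_codings(text: str):
    """Parse a raw LLM coding response, tolerating a ')' typo for the closing ']'."""
    cleaned = text.strip()
    if cleaned.endswith(")"):
        # Repair the malformed array terminator seen in the raw response above.
        cleaned = cleaned[:-1] + "]"
    rows = json.loads(cleaned)
    # Every row is expected to carry the four coding dimensions plus an id.
    required = {"id", "responsibility", "reasoning", "policy", "emotion"}
    for row in rows:
        missing = required - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id', '?')} missing {sorted(missing)}")
    return rows

# Hypothetical sample mirroring the response format above, including the ')' typo.
sample = ('[{"id":"ytc_example","responsibility":"none",'
          '"reasoning":"mixed","policy":"none","emotion":"approval"})')
rows = parse_codings(sample)
print(rows[0]["emotion"])  # approval
```

Repairing only a trailing `)` is deliberately narrow: any other malformation still raises, so genuinely unparseable output is surfaced rather than silently coded.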