Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Heard the news? https://youtu.be/1ONwQzauqkc?si=DjYVAfCOHd1UYuRj They solved AI hallucination as of last week, as expected. A lot of the problem with people comprehending the law of accelerating returns has to do with an issue of not properly visualizing how it works. It's an actual physical function within spacetime of local (shareable) information accumulation over time. That's it. Very simple. As such it is not expected to slow down or to reach a limit at any point before potential connections between information have been saturated and considering that there are (insert unimaginably massive number here) unexplored connections between data points that we have already collected as a species, we are obviously NOWHERE near the intelligence sigmoid inflection point. With every new scientific discovery that bears any relevance at all to AI or compute technology, that piece of data suddenly interacts with every other piece of data in every other field we have ever discovered and combines with them in an exponentially increasing way over time to make all possible combinations of that data (and the real world effects of scientists applying that emergent data borne from the increase of information over time) keep getting faster and more diverse in function. We can fully expect and predict that all of the missing pieces to autonomous superintelligence will simply emerge from previous data accumulation at exactly the right time so to speak, just as the hallucination solution occurred despite years now of people being skeptical that the AI would ever become as reliable as a human. I can say confidently say that I have visualised the Law of accelerating returns this way since I first read about it in my childhood in the early aughts and so none of this comes as a surprise but I am very surprised to still see people denying the expected future continuity of the exponential despite the remarkable stability we have seen of the predictions and underlying math so far. 
Human cognitive dissonance appears to know no limits.
youtube 2026-03-09T08:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgzmVA9Lwu-e0ZS39394AaABAg.AUXOj06bSQUAUorh4qMoLb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwoV9H5BfcykdpyrQx4AaABAg.AUHuZoCVS0uAUjLAyJLz1h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwZ6e3C94rkBLTePyt4AaABAg.AUBrKF3WtRhAUW2QTlUVPY","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugz_8mOIBF5Q490sngB4AaABAg.AUB0GmumXjXAUloyCLA9I4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugz0eAek8IuzqpmOexx4AaABAg.AU9XWSJxvDFAUW0phmZdee","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxFog4S75OBvqmqmRJ4AaABAg.AU7D27QMc0yAUAa1M9Lwtk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyUH0IGf6PBOvCHkQJ4AaABAg.AU6eYniERRGAU7aw2bVy4S","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyUH0IGf6PBOvCHkQJ4AaABAg.AU6eYniERRGAUB5ZJ2UwF_","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgxP9ugi4D-w6LU3nJ14AaABAg.AU6UPVIEd0CAUB5RzFCT4_","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgxP9ugi4D-w6LU3nJ14AaABAg.AU6UPVIEd0CAUIawBEgPIk","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
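Each record in the raw LLM response carries an `id` plus the four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and sanity-checked — the record list is abbreviated to two entries from the response above, and the required-keys check is an assumption inferred from the visible output, not a documented schema:

```python
import json
from collections import Counter

# Abbreviated raw LLM response (two of the ten records shown above).
raw_response = """[
  {"id": "ytr_UgzmVA9Lwu-e0ZS39394AaABAg.AUXOj06bSQUAUorh4qMoLb",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwoV9H5BfcykdpyrQx4AaABAg.AUHuZoCVS0uAUjLAyJLz1h",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]"""

records = json.loads(raw_response)

# Assumed schema: every record must carry an id and all four dimensions.
required_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
for record in records:
    missing = required_keys - record.keys()
    if missing:
        raise ValueError(f"record {record.get('id')} is missing {missing}")

# Tally the emotion codes across the batch.
emotion_counts = Counter(r["emotion"] for r in records)
print(emotion_counts)
```

This kind of check catches the common failure mode where the model drops a dimension from one record in a batched response, before the codes are written back to the comment table.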