Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with artificial intelligence is that all of its knowledge comes from what humanity gave it. Imagine a universe where humanity has decided that 5 + 1 = 10 and where the entire universe is limited to one galaxy. Suppose that, in this fictional universe, humanity fed all of this information, along with much other possibly correct and incorrect information, to an artificial intelligence. What is the probability that the artificial intelligence, given the totality of the information it received, will make the wrong decision? Now fast-forward to the real world. Why has humanity decided that everything we know is actually the right answer? Artificial intelligence is doomed to make mistakes and will not be able to come up with anything on its own, since it is built on the correct and incorrect information that humanity has provided. Imagine a child (standing in for artificial intelligence) who spends his entire life in a bunker filled with right and wrong answers (standing in for our world). Now the child has grown up, and everyone who taught him all these years gives him a task: he must solve their problem outside the bunker. What is the probability that the child will succeed if he was trained on both correct and incorrect information? This is the whole problem with artificial intelligence. If artificial intelligence cannot find answers on its own, through its own experiments, and relies only on the information humanity has provided, there will be no success.
youtube · AI Governance · 2024-02-24T19:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
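The table pairs each coding dimension with the single code the model assigned to this comment. Below is a minimal sketch of how such a record could be validated in Python; the allowed code lists are inferred only from the values visible on this page, not from the project's full codebook, so treat them as assumptions.

```python
# Minimal validation sketch for one coded record.
# ALLOWED_CODES is inferred from the values shown on this page;
# the project's actual codebook may define additional categories.

ALLOWED_CODES = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    for dimension, allowed in ALLOWED_CODES.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} is not an allowed code")
    return problems

# The record coded above passes cleanly:
print(validate_record({
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "mixed",
}))  # -> []
```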
Raw LLM Response
[ {"id":"ytc_UgyVJEhOsSLwCoj8Luh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwGu5WehEhELR51FQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwus2KddX8oM1GU4op4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyyQ_bD9QBJe4fSUSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwZALpmaznIwfIAtuB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzoiJ686Fti3L-nSxJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxGKFuDvKfgvF-TQPh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"}, {"id":"ytc_UgxwAj1SDsfPoTSjxTx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw-R7u2DdzHAIpTSil4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx-8lfizm0lyNrGuKh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]