Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Somehow, I always thought that if we live in the simulation then ... it's the one, ultimate simulation, so we're still the main , most important characters, it's the simulation build for us. The feeling that there may be 1000s of same (or slightly different) simulations is a bit sad because this means that:
- our world is like a movie
- person / engineer / somebody who ran these 1000s simulations , just by experience of monitoring other simulations, knows the future. He can see, key symptoms, key events that for example will result in world collapse, by 2050 for example. He / she looks at our simulation, looks at monitoring and has this facepalm: "oh, no, again they invented AI, again there is this Chat GPT. On the simulation 20543 it was named Chatbot GPT and still the world collapsed".

What i mean is - by monitoring many different situations, you already kind of know the future of these simulations. It's like watching 30 romantic movies - in the middle of the movie, you're kind of able to predict the ending. This most probably means that we have no free will. It most probably means that that is a set of inputs and output. If "my character" is sad on tuesday, visits the coffee shops, orders black coffee and there are 3 cakes to choose from, if there will be always same scenario, if the same simulation will be replayed, it means that i will always pick a cheesecake for example. Sure, on the another simulation (ran with tiny different settings), the sun may be shining a bit higher over the building, and making the whole coffee shop warm colors, it'll change my mood and then my character will pick a brownie, instead of cheesecake.

BUT this means, that in this particular simulation we live in - if we replay it over and over, it'll be just like a movie - same effect. no free will - we're npc doing same things in this particular simulation over and over ;ppp blah ! ;)
youtube AI Governance 2025-09-24T21:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugzw91VV3WxS4NJK4xh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyqOyMtM-RITcOZhLR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxaYfSSknWwESCIedR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzkMyiv1qIvHWdU_lx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyj6fEmw5X77Qa2Eat4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgybfdkcrGvgox5i7qV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgwObIW7eH1IfTIz1X54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgziGps0f9rZimNnIoJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy5KF7Lbg-Woy_PqTN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzWZZVuinbHsNXyNT14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
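The raw LLM response is a JSON array, one object per coded comment, and the Coding Result shown above is simply the entry whose `id` matches the displayed comment (`ytc_UgxaYfSSknWwESCIedR4AaABAg`, whose values match the table). A minimal sketch of that lookup, using a truncated copy of the response (the full array contains ten objects; the batching and display logic here are illustrative assumptions, not the tool's actual code):

```python
import json

# One entry copied from the raw LLM response above; the real payload
# holds ten such objects, one per comment in the batch.
raw = '''[
  {"id": "ytc_UgxaYfSSknWwESCIedR4AaABAg",
   "responsibility": "distributed",
   "reasoning": "mixed",
   "policy": "unclear",
   "emotion": "mixed"}
]'''

# Index the coded rows by comment id so a single comment's coding
# can be looked up for display.
codes = {row["id"]: row for row in json.loads(raw)}
row = codes["ytc_UgxaYfSSknWwESCIedR4AaABAg"]

for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {row[dim]}")
# prints:
# responsibility: distributed
# reasoning: mixed
# policy: unclear
# emotion: mixed
```

Keying by `id` rather than by array position is the safer choice here, since nothing guarantees the model returns the comments in the order they were sent.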