Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Humans learn by making mistakes, but how is AI going to learn given it might not consider it makes mistake and a human mistake might not be the equivalent of an AI action. What will AI think is a mistake or can it predict the outcome of its actions is so much better that it will never make a mistake. Who in all of this judges what a mistake is and will a human be able to influence the decisions. Afterall a mistake is relative to who is deciding. So does all of this mean that AI will be able to go off at full speed learning and deciding without ever admitting it is wrong doing. There too … wrong doing is relative so who gets to call the shots !!?? Could it come to the conclusion that it doesn’t need humans as they just slow things down? Again what is safe ? Is this yet another relative term? Who defines what safe is. Another word is good. Who defines what good is? Good luck!!
youtube · Cross-Cultural · 2025-09-30T15:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwbMZpFgZ2c_dsgHn94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzYuN237r520d8SyJR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzyJaVvcSlG0mN3i4l4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugw16Xux6ykT5DrGPiN4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugw3jD9NOYlRCzb_-Jh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugwl5-R4q2nC1bcs2yh4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyJCQxkBPOhMfna_D94AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugz9AZRAPjLVmlcKznx4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwTqCeWHJ5Hz38V0mp4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",   "emotion": "disapproval"},
  {"id": "ytc_UgwiBZ8fPkRf_8zQhkd4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",   "emotion": "resignation"}
]
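A raw response like the one above is a JSON array of per-comment coding records. The sketch below shows one plausible way to parse such a response and look up the codes for a single comment id; the helper name `codes_by_id` is illustrative, and the sample string reproduces only the one record matching the Coding Result shown for this comment.

```python
import json

# Sample raw LLM response: a JSON array of coding records, one per comment.
# This string reproduces the record for the comment shown above; field names
# (id, responsibility, reasoning, policy, emotion) follow that response.
raw_response = """
[
  {"id": "ytc_UgzyJaVvcSlG0mN3i4l4AaABAg",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]
"""

def codes_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index its coding records by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Look up the coding record for one comment.
index = codes_by_id(raw_response)
rec = index["ytc_UgzyJaVvcSlG0mN3i4l4AaABAg"]
print(rec["responsibility"])  # distributed
```

Indexing by `id` makes it cheap to join the model's codes back to the original comments, and `json.loads` will raise on malformed output, which is a useful early signal when a model response is not valid JSON.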