Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't know whether this AI will lead to the extinction of humans or not. If their goal is to achieve the best in AI, their first priority should be the solutions. These companies are very selfish. I'm seeing news like engineers were fired, because AI is now more capable than humans. My question is to whom they are creating these models, if we do not have a job. Even if the UBI is implemented, how long will it survive the boredom? In my language, there is this saying, "We survive because we work, we die because we eat." This translation may not make sense. In plain English, it means if we live without work, we die. Matthew 16:26: For what is a man profited, if he shall gain the whole world and lose his own soul? Or what shall a man give in exchange for his soul?
youtube AI Governance 2025-08-21T10:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       contractualist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugzl0qJNNG-DtHB85ZN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy7CoPIMIHw7FQ570R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEZ_cGJD9dhzMHXWB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwf9wFP2sbtqUXV7I94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPUEpHVVZ7dgivHPR4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyUsZJLO5RB9r2w_054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy99_wHKxA9AtB-nMB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy0VAlqZCW9PWD9n554AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx2gndORJaGj-jIDSJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyhB5ZQ88_9uKON72Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
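The coding-result table above is a single record looked up by id from this raw array. A minimal sketch of that step in Python, assuming the response is valid JSON; the `SCHEMA` label sets are inferred only from the values visible in this export, and the real codebook may allow more:

```python
import json

# Excerpt of two entries from the raw LLM response above (ids copied verbatim).
raw = '''[
  {"id":"ytc_UgzPUEpHVVZ7dgivHPR4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy7CoPIMIHw7FQ570R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]'''

# Allowed labels per dimension, inferred from this export (an assumption,
# not the official codebook).
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference"},
}

def index_codings(text):
    """Parse the raw response and index codings by comment id,
    rejecting any label outside the expected set."""
    by_id = {}
    for row in json.loads(text):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim!r} value {row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgzPUEpHVVZ7dgivHPR4AaABAg"]["policy"])  # → liability
```

Validating labels at parse time catches the common failure mode of batch coding, where the model invents an off-schema label for one comment in the array.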