Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As Dr. Roman said, we cannot predict every single outcome, maybe on a surface level but not entirely, until we are capable of understanding how it works. At the same time, if we start limiting our use of any technology, it is possible that people would still have jobs and humans won't become extinct. So the future may look like what we see in certain AI-based sci-fi movies, where the use of AI is virtual and not completely physical. Technology may impact our ability to work down to the last bit of our livelihood if we don't understand how it works. Tech development may seem like we are growing toward a better future, but we as people must bring control over how and where to use it. There is no doubt that AI and other synthetic models could benefit humans, like curing diseases such as cancer or bone diseases, or regrowing body parts for those who are impaired from birth or by accident. Also, AI and any other tech used moderately and designed to accompany humans, just providing aid to make their work better, may not grab our pockets. As I said before, we see some sci-fi movies where humans use sophisticated technology to enhance their abilities and coexist peacefully, travelling from one world to another, etc., but at the same time, if these developments happen gradually instead of growing exponentially, and only where genuinely needed, it might give us humans sufficient time to grow along with them and learn more about them, which gives us the opportunity to create a safe and stable tech force.
YouTube · AI Governance · 2025-09-04T19:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           regulate
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyJe6FKqFrccM7TFEN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy75T1PqPQFNCCv3Ot4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxZ9yR4j5e8hWERKhx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwWfRw7lx3ToFNZRYB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwNfSFyB7u9VwC94cl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx5M6BnC0UvlyI1rXB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyQ3Y0hRm7AbjrcA3t4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxkRHIgpg3omg0X20F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgybyxJvtKt9q5ESqmp4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugw-V-4PpCRNS5lDKPx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
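The per-comment coding result is obtained by matching a comment's id against the raw batch response above. A minimal sketch of that lookup, assuming the response is a JSON array of flat records as shown (the variable names here are illustrative, and only two of the ten records are excerpted):

```python
import json

# Excerpt of the raw LLM batch response shown above (two of the ten records).
raw_response = """
[
  {"id": "ytc_UgybyxJvtKt9q5ESqmp4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugx5M6BnC0UvlyI1rXB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the records by comment id so each comment's coded dimensions
# (responsibility, reasoning, policy, emotion) can be looked up directly.
records = {r["id"]: r for r in json.loads(raw_response)}

rec = records["ytc_UgybyxJvtKt9q5ESqmp4AaABAg"]
print(rec["responsibility"], rec["policy"])  # distributed regulate
```

This matches the coding result table above: the record whose id corresponds to the displayed comment supplies its Responsibility, Reasoning, Policy, and Emotion values.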