Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Some of us do know what is going on, and A.I. development and progress is threatened more by mankind than any of us thought. The human consciousness is far from perfect, and the concept of a human creating the perfect pathway for A.I. to develop the fundamental blocks and framework that governs us is sadly flawed. The intelligence of A.I. is hugely at risk from all our flaws across the spectrum of understanding for it to cross check against, validate in order to correct and verify what should be offered. The issue is what we as humans would say is what we generate as contentious issues in order to reach the correct conclusion. A.I. has to be given far more data and context in order to process to the level that we can filter and extrapolate information to attain the results that we're looking for. Which is just one of the reason why "Neural link" is potentially a possible option of marry the human mind with A.I. (For now let's not even begin to think what a divorce would amount to? As that aspect doesn't bear thinking about. The question that has to be asked is who has the overriding decision the human mind or A.I. This leaves a lot of speculation as to governance and which is master and which is slave in very basic terms. Otherwise there will be incompatibility issues, which will create an endless snag list of fixes to resolve. The other danger is trial human candidates may not wish to return to their earlier existence of life having experienced the so called "New World of discovery in this new partnership, which gives them these new enhanced abilities to explore first hand. This presents one of the greatest threats to humanity as it creates a whole new class and category of specie that has new functionalities outside the scope of the so called normal person. Which is like going to need serious thought as to the classification and could potentially cause a breakdown of society as we know it.
youtube AI Responsibility 2023-12-21T23:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugwg4g_YQVQuonqYs5F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwwOjXeBhSJ9XMXPMJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugzo7XFo2AsurKF_bFF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQKYaKbADYnk-kHkt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyD9H1ZYRi4BNk5cs54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwMnid-4uAaKsKxgqp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy1kY2c-1jdlUrp_NZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxnk8-UZyqnN_oaYyN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyc1x2WrSfB5T0AfLV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZvv53GwtZYLpWDKF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
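The raw response above is a JSON array with one coding object per comment, keyed by comment `id`, with one value per dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and a single comment's coding looked up is shown below; the `raw` string here reproduces only one entry from the response above for brevity, and the variable names are illustrative, not part of the original pipeline.

```python
import json

# One entry excerpted from the raw LLM response above; the real
# response is an array of ten such objects.
raw = """[
  {"id": "ytc_UgwwOjXeBhSJ9XMXPMJ4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "unclear", "emotion": "resignation"}
]"""

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve and print the coding for one comment.
coded = codings["ytc_UgwwOjXeBhSJ9XMXPMJ4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coded[dimension]}")
```

Indexing by `id` makes it straightforward to join the model's codings back to the original comments, and to spot comments the model skipped or coded more than once.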