Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When something new arrives, it is generally hyped! One side overestimates its purpose and works from 'hope'; the other side is concerned and can even project fear of the unknown. Both are sides of the same coin, and it's fine that they balance each other out. Panic, however, is a bit overdone. ChatGPT-style systems filter information and couple it; that's not real intelligence. Such systems will level off when they evolve to a saturation point! More data won't make them dramatically better after a few new versions. The next real step in AI will be hardware made for AI, like ECRAM chips. So AI will come in stages, not a straight line up. This gives us time for regulation. It is a tool, so treat it as a tool. Take a chair: we sit on it; we don't use it as a key to open doors. So just have regulations, and don't give an AI system direct influence over the world, and all is fine. Also, people often swap the meaning of intelligence with awareness and consciousness. That is not consistent. Intelligence is not the same as having self-reflection. The latter comes from an integrated state of information. We don't yet have chips that work on entangled photons to create an integrated information state. So being concerned is good, as long as the world doesn't give AI direct influence over our personal lives and our societies. Let it do what it's best at: seeing the most functional path to solutions within any information stack(s), things we do more slowly or simply don't see. For example, designing new medication with fewer side effects.
youtube AI Governance 2023-05-18T23:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgywEAxfuzJtJnfXDmR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxlllcwusQDwHcWgm54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYUD4Kbp5R0EdbqK94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwkzV8diU30_FFGc654AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxVNccU9WJd067yOF54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwCqZDTeSwaVjKWued4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxj3JutJE1VKIrYk4d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyHPdpNxyVwDnvdshZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz2vPPfHcWGEcAWg9B4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyPRg65oKJ35DCj29p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
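The raw LLM response is a JSON array of per-comment codings, so the values shown in the Coding Result table come from the array entry whose id matches this comment. A minimal sketch of that lookup, using a two-entry excerpt of the response above (the id `ytc_UgxlllcwusQDwHcWgm54AaABAg` is taken from the response itself):

```python
import json

# Excerpt of the raw LLM response: a JSON array of coding objects,
# one per comment, keyed by the comment's YouTube id.
raw = '''[
  {"id":"ytc_UgywEAxfuzJtJnfXDmR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxlllcwusQDwHcWgm54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

# Parse the batch response and index the codings by comment id.
codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}

# Look up the coding for the comment shown on this page.
coding = by_id["ytc_UgxlllcwusQDwHcWgm54AaABAg"]
print(coding["reasoning"], coding["emotion"])  # mixed indifference
```

This matches the Coding Result table above: reasoning "mixed" and emotion "indifference" for this comment.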