Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are two fundamental problems with AI compared to an evolution-based intelligence like humans. First is the lack of random variation. AI cannot create its own problems out of whimsy, which humans routinely do. We are troublemakers. Isaac Newton created problems that nobody asked for, but he solved them anyway and led humanity into a new era. Second is the lack of natural-selection behavior. AI cannot create social demand out of nothing, which humans routinely do, like buying an expensive car just to show off, or going shopping because there is nothing better to do. Constant social demands create the force both for innovation and for getting rid of the old and the inefficient. Without variation- and natural-selection-based intelligence taking the lead, AI can only advance in one dimension, no matter how far it goes. AI is just one of many revolutionary things in the human evolutionary journey; 1000 years from now the likelihood that humans have moved on to completely different big things is almost 100%. Sure, AI may kill us all, but only if we allow it to happen, not unlike nuclear weapons technology.
youtube AI Governance 2024-06-10T00:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwlXDBUOv4CZOLC6kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw2GE85cS38zeCbnm94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxuZ--LGfTTdotlGsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyXuFbP7-6-dO6ZklV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyC-0g3j9x0AulMJop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyKorsn0SG-DrV7EEd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxoNOh_IIXrRsZI1lZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyJQ9xVvl49B4LFhRF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwX1jvreLwTwZV1yw94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugx6iVnL0JsUjwsmI6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
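To inspect a coding for a specific comment, the raw response above can be parsed as a JSON array keyed by comment id. A minimal sketch in Python, using the field names shown in the response (the array here is truncated to three entries for brevity; the actual pipeline's parsing code may differ):

```python
import json
from collections import Counter

# A subset of the raw LLM response shown above.
raw = '''[
  {"id":"ytc_UgwlXDBUOv4CZOLC6kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXuFbP7-6-dO6ZklV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyKorsn0SG-DrV7EEd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up one dimension of one comment's coding.
print(codings["ytc_UgyXuFbP7-6-dO6ZklV4AaABAg"]["policy"])  # ban

# Tally one dimension across all coded comments.
emotion_counts = Counter(row["emotion"] for row in codings.values())
print(emotion_counts)
```

The same pattern extends to any of the four dimensions (responsibility, reasoning, policy, emotion) for aggregate summaries.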