Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
we got to stop thinking about super AI being a "tool" that helps create something more efficiently so we will have job loss to contend with like the industrial revolution. Its more like creating a new "creator" who can just replace our contribution entirely. Stopping now to reevaluate should be our goal, as many experts have said our current models can do things that we have not even began to economically roll out. Instead of pushing towards a uncertain unknown, unprepared like a moth to a flame.

Use General AI in a Narrow AI perspective to redefine our understanding and growth in every field while maintaining our control should be our easiest choice ever but it seems not the likely scenario. I've never been someone to worry about the global position on things but this topic makes me uncomfortable. Weirdly at first it was for my children but now at the rate it's going it would affect even me in my lifetime.

We need experts to stop saying something like "keeping us in the dark", "his special sauce", etc and start detailing out a exact plan on how they manage AI safety or if they cannot then be TRANSPEARENT. No one should be making choices that will affect us all and just saying "its ok I'm on it but cant talk about it". Experts already say AI is more like a black box that we create and study, we don't know what it is really and given certain inputs could be given something completely different even using the same equation.

They have stated it could take decades to understand the capability of the model we have already produced but we carelessly push forward instead of slowing. It's uncertain times to be sure... Look at some of the predictions : 3 years General AI, 3-4 years 60% intellectual jobs replaced, 5-6 years AI robots able to replace physical workforce. At that point is there even an idea on the job loss %? 70?80?90? We got to be careful. We learned nothing from Oppenheimer
youtube Cross-Cultural 2025-10-28T02:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxDiuOzN9coKAveE3Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxaeRLfxnBSTbs9qIt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw6iaBKQNP0sVd9nmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzNOPxbToZEQcxgy_V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgycKR0dYWCcwqk0x5F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy4hEZGLgydyeR3tPl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxsn6DRq7M4Ee64TSF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzYMzxb1QC3OH-debd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxsPYH1A8oDwjgsHyl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHILG_Z3A6guYpcRR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
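The coding result shown above corresponds to one record in this batch response, matched by comment id. A minimal sketch of how that lookup could be done; the variable names and parsing approach are illustrative assumptions, not this tool's actual implementation:

```python
import json

# Two records copied from the raw batch response above (truncated for brevity).
raw_response = '''
[ {"id":"ytc_UgzNOPxbToZEQcxgy_V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgycKR0dYWCcwqk0x5F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"} ]
'''

records = json.loads(raw_response)           # the model returns a JSON array
coded_by_id = {r["id"]: r for r in records}  # index records by comment id

# Look up the coded dimensions for the comment inspected above.
row = coded_by_id["ytc_UgzNOPxbToZEQcxgy_V4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

In practice the model's output would also need validation (e.g. checking that every dimension value falls in the codebook's allowed set) before it is stored as a coding result.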