Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is not like social media. It's shifting the paradigm of what humans have been so far: wake up, go to work, sleep, rinse and repeat. In prehistoric times, ancient times, medieval times, and modern times, people relied on working to ground their existence. AI may change that wake-up and let AI do most of the work. We need to find something else to do during the day. We may decide to work still, but we will not have to. It will be a choice so that it will be more of a hobby. We are probably not ready. The change is coming too quickly, and our brains and institutions must answer many questions and problems. Hopefully, AI will help to smooth the revolution, but we don't know. Ultimately, capital greed reached the point that revolution may be our only hope. A lot of people are falling back to violent government regimes, and following violent leaders, so maybe AI will come to save us because capital has corrupted us to the point of no return. I understand the need to fear AI, but personally, I fear how stupid people have become more than AI. Perhaps AI is here to save us, maybe to end us. I am fine with both, as much as I hope it may actually save us. Do you really think the direction the people are going in politically and philosophically right now is good and not self-destructive? Isn't AI maybe here to help?
Source: YouTube · AI Governance · 2024-12-21T18:1… · ♥ 4
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyM7Xgbm37_5AM0hrt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwiWSjUvhMAVniM-nZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwZ9SWxKW-pl9A86N54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyM6_hipuy7cjiAoJt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7YezauFgDUEJrRN94AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw6Yk7EBtPS0yzC0E94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw79jog0zeJMBjRM7R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugydx848OT3e5b2jVoR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-1YyJ1WEmPHZqv454AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwQoPdEBfrvHfAEKrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
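A response like the array above can be parsed back into per-comment codings in a few lines. The sketch below is illustrative only: `parse_codings` is a hypothetical helper, and the allowed vocabulary per dimension is inferred from the values visible in this output — the actual codebook may define additional labels.

```python
import json

# Allowed codes per dimension, inferred from the values observed in this
# raw response (an assumption; the real codebook may include more labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    into a mapping from comment id to its coded dimensions,
    rejecting any value outside the expected vocabulary."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        coding = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} code {value!r}")
        out[cid] = coding
    return out

# Two entries copied from the raw response above.
raw = '''[
  {"id":"ytc_UgyM6_hipuy7cjiAoJt4AaABAg","responsibility":"none",
   "reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-1YyJ1WEmPHZqv454AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

codings = parse_codings(raw)
print(codings["ytc_UgyM6_hipuy7cjiAoJt4AaABAg"]["emotion"])  # indifference
```

Validating against a closed vocabulary at parse time catches the common failure mode where the model invents an off-schema label, so bad codings fail loudly instead of silently entering the dataset.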