Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a software developer, I interact with AI 24/7. As the CTO, I don't need to hire additional roles like a DevOps engineer. Instead, I successfully set up a full Kubernetes system on Google Cloud, complete with various databases, CI/CD pipelines, and automated deployment of new applications as microservices (similar to Vercel but more versatile). The entire setup was completed in just one week, in one shot. While I didn't need to hire a DevOps engineer, I wouldn't have been able to allocate the budget for this specific use case anyway. So, in the startup I work for, we're doing more with less, but we thrive on creating new tools. On the other hand, big companies that are already established, like a cell phone provider or a supermarket, will cut and cut roles. In my opinion, the only way to maintain job creation is by establishing new creative companies that use AI to develop novel products and services. However, the challenges that need to be addressed demand a higher IQ level than what schools currently produce, one that enables effective control over AI. And the low IQ level we have in the West right now is correlated with our advanced capitalism, microplastics, pollution, heavy metals, social networks, etc.; it's difficult to raise kids in this environment. Perhaps a highly intelligent superbeing would seize control and set out to rectify humanity. I believe destroying humans would be less probable because such systems are trained on our data. However, if it were a novel artificial entity that learns through interaction, as humans do, and has negative experiences, being treated as a slave, then the likelihood of destruction would increase.
YouTube · AI Governance · 2025-09-04T12:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugz-rEr5WdFU49rCMJp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyrES66qDAyJMQ6Kc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwY21Va-HpLCkjXQQB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwOFu5xTj6gX4EPeU94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwNRECgCGtHhVWp-cB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyRcsecUdZrVQ16yHF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwvXx4V_In4qiILPP54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxcGo4GXzjBL3JTRQZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwJWQvYK2lizfXmrp54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzbmMAxkkNDnKBZ4KB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"} ]