Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We are facing tech with exponential growth and a lack of time for human to adapt. Initial progress is very slow (the beginning of ML&AI took more than 20 years, we had to wait for google transformers paper in 2017 for the field to start growing. Same for autonomous cars. It took 20 years. Yet, in the coming 4 or 5 , all the new cars will have the option to be autonomous and in a generation or two only collectors will drive their cars. (which in itself is totally fine). The issue with AI is that it is software only (accelerating evolution) adding the concentration of power without oversight and the race to achieve AGI under the pretense to compete with China. This means that 8 billion human beings have become sort of an experiment for a few companies. If tech researchers with PhDs are taken over by AI in 2028 what will our kids learn? This means massive unemployment in white collar workers who have spent years building their careers. The need for global ethical frameworks, security frameworks, company oversight, international AI cooperation between China, US, EU on AI is long overdue. Shalom.
youtube AI Governance 2025-11-01T12:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgypDAKpkNfVf44_GuJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzdwzAHsEShVhhjuU14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx0xxNRw_Cuhd3mwxx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwxr_qXgQAAnZPLpw14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxJMQnDwMagLV-yABt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxgCo1UXew8X6gER_94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwR1SzXok6FYR6Tvst4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4VQ8RjrTECxm1dax4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwXEZBevC5_nepoXqZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzuPZZea4brPPOHFvJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
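A raw response like the one above can be turned into the per-comment coding table by parsing the JSON array and indexing records by comment id. The sketch below is a minimal illustration, not the tool's actual implementation: the `SCHEMA` sets are assumptions inferred only from the code values visible in this output, and the sample `raw` string is a one-record excerpt.

```python
import json

# Allowed codes per dimension. These sets are ASSUMED from the values
# observed in the raw response above; the real codebook may differ.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"unclear", "none"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "indifference"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of records) and
    return a dict mapping comment id -> coded dimensions."""
    coded = {}
    for rec in json.loads(raw):
        # Reject records whose codes fall outside the (assumed) codebook.
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} code {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# One-record excerpt of the raw response shown above.
raw = ('[{"id":"ytc_UgwXEZBevC5_nepoXqZ4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgwXEZBevC5_nepoXqZ4AaABAg"]["emotion"])  # mixed
```

Indexing by id is what lets the "Coding Result" table for a single displayed comment be looked up from the batch response.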