Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is not the problem with AI, nor is the apocalyptic scenario the problem. Every technological advancement we made rendered something obsolete. Think about fire: barely anyone could make one without tools. What about GPS? Who is still capable of reading a map and using it to navigate? Not many people. The same happened with autocorrect: you don't need to know all the vocabulary to still be able to communicate. These are small things, even though they do add up in the long run and people become more stupid in turn. Now AI... a tool that can do the thinking for you. The more mainstream it becomes, the less people think for themselves; we will lose our ability to think critically and become more over-reliant on the technology than on any technology before it. If we continue down this path, we will stagnate as a species and over time lose the ability of critical thinking. It would essentially be the end of humanity.
youtube AI Responsibility 2025-05-18T15:0…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | resignation
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwTg8olocTla9mTAL54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzRld4ruYxvQmUeclJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwmpINpRX7DJoaQeCB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyof273L7KbmjzVQF14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxuRgE5CngADBeJ2Zd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxi2pCbnI5KV6O_j0N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxR2oqArh-dZnfxX8x4AaABAg", "responsibility": "society", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwCNadhBzA5VXxUKOV4AaABAg", "responsibility": "ai_itself", "reasoning": "clarification", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxx1QxRAsLE9FI4mkt4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwBXerzYxOlO6Q5Ta54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
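To inspect the coding for one comment, the raw response can be parsed and indexed by comment id. This is a minimal sketch, assuming the raw response is valid JSON; the `index_codings` helper and the truncated `raw_response` excerpt here are illustrative, not part of the tool itself.

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten objects).
raw_response = '''
[
  {"id": "ytc_Ugyof273L7KbmjzVQF14AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxx1QxRAsLE9FI4mkt4AaABAg",
   "responsibility": "government", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding dict by its comment id."""
    return {item["id"]: item for item in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugyof273L7KbmjzVQF14AaABAg"]
print(coding["emotion"])  # -> resignation
```

Looking up the comment shown above this way reproduces the table: responsibility `none`, reasoning `consequentialist`, policy `none`, emotion `resignation`.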