Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Meanwhile: No single $ has been saved using AI. No single employee was replaced by AI, half were fired, 10.000 Ai's joined the workforce, the remaining 2000 employees now pick up the slack while dealing with 10.000 broken AI tools. Burn-outs, de-realization, and narcolepsy are now just another emotion you have while @ work, vacation no longer exists. Hundreds of millions of people are talking to AI and constantly correcting it (because it makes more mistakes than a mathematician performing arrhythmic on a blackboard, in chalk, while blindfolded and using boxing gloves, with someone erasing every other line and replacing it with random numbers), and YET, it has seemingly only gotten more stupid, not more smart. (go figure, talking to humans actually makes you dumber... who would have ever guessed?!). BUT we are very close to super-intelligence... oh yes indeed! Well, i suppose, if you consider "smarter than humans" (which isn't saying much, now is it?) to be "super"intelligence? Then I'm sure we're close... close as in; it will probably arrive some time before or after warp drive, universal replicators and the thing that's better than sliced bread. Not holding my breath, here. I'll be more open to the idea of an AI apocalypse when it actually starts solving basic toddler level puzzles without getting stuck in an infinite recursion brain-fart.
Source: youtube · Cross-Cultural · 2025-09-30T22:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyyEKhlWyZdcYRbijJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx_fJVHeGJj4DxpNQN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz6ui4MDxnI87HRhuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwMvk1wz3wwGpcPOdx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwR3-lvU6-yIIJOpIF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwE4YkYfLgMGXnD6f14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzaifhSanQHSPLLTnF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyQ523YeTcvIKI69FJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzsnE3dyEe_v_fKIOd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzGBzDVsK4Q4QF__yd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
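A batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal illustration, not part of the original pipeline: the `ALLOWED` label sets are inferred only from the values visible in this record (the full codebook may contain more labels), and `parse_codes` is a hypothetical helper name.

```python
import json

# Allowed labels per coding dimension, inferred from the codes seen in this
# record (assumption: the real codebook may define additional labels).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "resignation", "fear", "indifference",
                "mixed", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and keep
    only records whose labels are all in the allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# Example: one well-formed record and one with an out-of-codebook label.
raw = ('[{"id":"ytc_UgyyEKhlWyZdcYRbijJ4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
codes = parse_codes(raw)
```

Rejecting unknown labels at parse time keeps downstream tallies clean when the model occasionally invents a category outside the codebook.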