Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I work with AI quite a bit, though in a tangential area (simulation theory). AI as it is right now does not scale as a tech startup, as the expansion cost is not marginal as other startups, but is scaling as if it was. This means it is a genuine bubble, especially as diseconomies of scale apply (construction of new facilities in inefficient areas, requiring new water diversion systems, pollution management, etc.). The math just is not mathing here (LLMs). But physical robots (what I am working on), actually does have the capability of replacing humans in nearly all non cognitive roles that require repetition, and this is not hyperbole. My past research itself is to have robotic systems construct an internal world view of their environment to plan out various activities, just like a human would / does. Look at Boston Dynamics and their robots and the ilk. BUT this does not mean AI will lead to ruin. It will in this current economic system. But if deployed without oligarch interests and in competitive environments, it could have the same impact as the first industrial revolution depending on the margin cost of scaling.
youtube AI Jobs 2025-10-08T05:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwVXi95TjrgFLEz_xl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxO7C1yDUevZCAo6RV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwIaifEU8M295Luspt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyL1OrNIZ59z2KQahF4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwMOWsy_QkQFb15_BV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugye0DMbVgKjqo8mNd54AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz5voyMLaItHBCOnkJ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwwm2Ku-nettIyJNaB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz4c0VJubK0R8lLF_l4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxC84qh1h4Z-Y_svGx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
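The raw LLM response is a JSON array of per-comment codings keyed by `id`, with field names matching the dimensions in the Coding Result table. A minimal sketch of looking up one comment's coding from such a response (the helper name `coding_for` is hypothetical; the sample data below uses two records from the response above):

```python
import json

# Sample raw LLM response: a JSON array of coded comments.
# Records copied from the batch above; fields match the coding dimensions.
raw = '''[
  {"id": "ytc_UgwVXi95TjrgFLEz_xl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxO7C1yDUevZCAo6RV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for row in json.loads(raw_json):
        if row["id"] == comment_id:
            return row
    raise KeyError(comment_id)

result = coding_for(raw, "ytc_UgxO7C1yDUevZCAo6RV4AaABAg")
print(result["emotion"])  # fear
```

This is how the per-comment "Coding Result" shown above can be recovered from the batch response: the comment coded `company` / `consequentialist` / `regulate` / `fear` is the second record in the array.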