Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LMAO you're the one who's coping. Look around you. When your ideology clashes with reality, you should update on reality, not double down on your ideology. 1. LLMs --> AI agents aren't problem solvers? How can you see techniques like Chain of thoughts and Devin and still say this with a straight face? Sure, it's crude. But you think that last 10% of whatever edge you have against Devin1 is just gonna...... stay after Devin2, Devin3, DevinTurbo..... DevinTurbo who optimizes DevinUltra ..... 2. AI can't consider ethics. Ethics matter because we still are a democracy. When labour is completely replacable, there wil be no need for ethics. Look at all the Claude/ LLaMa jailbreaks. Is it ethical to push out products that can be easily mis-used? No but it's not like it matters compared to the enormous economic gain that is to follow. 3. Have you been paying attention at how fast capabilities are scaling? "AI can't do x" gets refuted every few months, and yet you somehow imagine there's some unjustified final frontier that this global gold rush isn't going to solve. Is it against the law of physics that AI can't x? If no, you're just refusing to update on exponential capability ramp-ups. "chatGPT in 2024 is dumb, duh so my children's children will have jobs and AI as their assistants" 4. Do you think Geoffrey Hinton can't code? Do you think the thousands of software engineers (a good amount working daily on SOTA AI ) warning against further capability upgrades just.... can't code?
Source: youtube · AI Jobs · 2024-04-06T18:2… · ♥ 7
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
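
Each coding result is a flat record of categorical labels plus a timestamp. Below is a minimal sketch of that record as a Python dataclass, assuming the dimension names above map one-to-one to fields; the class and field names are illustrative, and the value comments list only the labels observed in this batch, not a full codebook.

from dataclasses import dataclass

@dataclass
class CodingResult:
    # One coded comment: categorical labels plus the time it was coded.
    comment_id: str       # e.g. "ytc_Ugy-lzw9BASP62IQnpd4AaABAg"
    responsibility: str   # observed: none, user, developer, company, ai_itself
    reasoning: str        # observed: consequentialist, deontological, mixed, unclear
    policy: str           # observed: none, regulate, liability
    emotion: str          # observed: outrage, fear, resignation, indifference, approval
    coded_at: str         # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"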
Raw LLM Response
[{"id":"ytc_UgyLhUmai4Y9QPx_3mN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_Ugz6TRadP1XLXKAupTF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgxQS84kLJ1NHETGqt14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_Ugy-lzw9BASP62IQnpd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},{"id":"ytc_Ugw5ZbFITvEnAji39SV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgzctZ7b5Y0614U3jr94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_Ugy4JbK8DofZXzzyW9J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugzem888exCddNp_DNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},{"id":"ytc_UgyzhZNFlM98OMUGAJx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgyU6QKH_V2rPL060Bd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]