Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Man, I hate this comment section. We could literally be talking about the evolution of the human species here and there are people saying "stop this before it's too late." AI could unlock things we couldn't even imagine. The doomers could just be short-sighted. They don't necessarily have more advanced insight than anyone else. Many could just be riding the fear-mongering wave, not trying to think deeply about this, not trying to see the bigger picture. If we stop working towards AGI right now, nothing changes. We don't make progress as humanity (or we continue at the same speed), we don't go forward. In say 50 years, we may not have been able to find working solutions for the problems that we know we would be facing by then. For AGI this could turn out to be a piece of cake. That's just one example, there's a million more. AGI is probably the way forward for humanity right now; nothing holds more potential, nothing is more exciting right now. If we don't do this, I really struggle to see how we will be able to make the progress we would with its help (if you've got a clue, say it). The fact that some jobs will become obsolete and new ones will be found means nothing in the bigger picture.
youtube Viral AI Reaction 2025-05-01T18:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugy2RPUEoM7wVujr1kZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyi-WTV4NQv3dgCT5l4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwWwenddMkyxN8vl1N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyUCh3bO4qXWiCwDtJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxkxql5Pzw9m6cMLel4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyubBwRc1tz_ouCMxB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx6f7Gdsn4_KE9rBq54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz3EKhqoEXc3seCl594AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx3Pi8D-dR8KV0cyVB4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzuCwlUbB5vOnbaiSp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
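The coded dimensions shown above come from matching the comment's id against this JSON array. A minimal sketch of that lookup in Python, assuming the raw response parses as valid JSON (the array here is shortened to one record copied from the response; the tool's actual parsing code is not shown in this page):

```python
import json

# Shortened raw model output: one coded record copied from the full response.
raw = (
    '[{"id":"ytc_UgyUCh3bO4qXWiCwDtJ4AaABAg",'
    '"responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"outrage"}]'
)

# Parse the array and index the records by comment id,
# so each coded comment can be looked up directly.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the record for the comment shown on this page.
code = by_id["ytc_UgyUCh3bO4qXWiCwDtJ4AaABAg"]
print(code["reasoning"], code["emotion"])  # mixed outrage
```

If the model ever returns malformed JSON or omits an id, `json.loads` raises `JSONDecodeError` and the dict lookup raises `KeyError`, so a real pipeline would likely wrap both in error handling.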