Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For all the people who are skeptical about this and talking about how we're still grappling with potholes and no healthcare, and generally saying we are exaggerating this because it's going to take a long time -- if you look at the graph of the rate of technological advancement of humanity, you'll see that the rate of technological advancement is actually exponential. Time is a finicky thing. You get lost in it. It slips past you. It can't come soon enough. Time, like everything else, is actually pretty relative (no, it literally is). It took 2.4 million years for our ancestors to control fire and use it for cooking, but only 66 years to go from the first flight to humans landing on the moon. What we have achieved technologically in the past 10 years would have taken centuries to develop were it 20 years prior or so. Graphs of what we have achieved all show a clearly exponential, stark jump instead of a linear climb. Modern AI is far beyond what we have achieved thus far (although the field of AI itself has been around since ~ the mid-1900s), and this will skyrocket our capabilities and achievements even more. Plenty of researchers and former STEM staff of OpenAI and other tech companies working on AI are sounding the alarm, and we'd best heed it. It may not happen as quickly as AI2027 says it will, but the collective fears and warnings of an existential threat from AI are more likely than not justified. If we take action to halt AI development now, or sabotage it ourselves, then we might slow its eventual arrival.
Source: youtube · AI Governance · 2025-08-02T13:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference

Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgyiIsRCrkudXgMr7Gp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzXjepzVfduOsr_3cx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgxF8hp9Gj97lBJEJtN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy93eG2rj6xYsUilLR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgxevY9G-6y53j2bSLR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw-4TZIqw1T5ig4Igd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugz2vt-eImMiIuZMCf54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyorJTg3Ecx4RyOyu94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgyVBclBXUPzeXxOYHJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgysaaeXHLL7NHkm_hJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]