Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Anonymous-8080 .. OpenAI's AI isn't anywhere close to being an AGI, rumor mills and social media loves to exaggerate. The common measure of what most agree in AGI is in relationship to human intelligence and consciousness. You can have dozens of narrow AIs in one AI It still doesn't make it an AGI. To be clear, nowhere am I saying or even remotely implying we won't have an AGI nor an ASI because we will. The only question is when. According to Elon Musk, self-driven Level 5 cars should have been a thing years ago, but here we are driving their cars 80 mph into a stopped truck. Similarly, there are narrow AIs which are light years ahead of what any human is capable of doing especially at the speeds and extreme volume the AI is able to compile data. I own a data center and while it doesn't make me a know-it-all, it gives me a pretty good idea where we're really at right now. Here's some food for thought. Our little brains are amazing little things for what they do and for how little energy they consume. The equivalent silicon right now requires its own substation to power (multiple gigawatts). Think about that one. Then next think about how that relates to any AGI.
youtube AI Governance 2023-12-03T12:5… ♥ 3
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgxLXt7IcKXx1OgWDwl4AaABAg.9wdy4Rs0GcsA4YAIFfRO3V", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxLXt7IcKXx1OgWDwl4AaABAg.9wdy4Rs0GcsA4YPzcsJ4Ke", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzEstUayZhCpajDOHh4AaABAg.9wc6Eb2Fx7l9xqpJ9y93J-", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgzEstUayZhCpajDOHh4AaABAg.9wc6Eb2Fx7l9yh_sGtf6JJ", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwJRS2gLugovS02taJ4AaABAg.9wbofxctIx39xqIEc3gvBv", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_UgwJRS2gLugovS02taJ4AaABAg.9wbofxctIx39xqfuHIaluG", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugx8r84v_JpAF_1bOpN4AaABAg.9w_PX3ujyL59wf_-__KUbT", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugz8XXbHD-Yp5IpBHFh4AaABAg.9wYr6TnX9LO9wlGHmOB4PH", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxE39q66FlihxuyOLl4AaABAg.9wXyaW_nQ_i9xr_CTNgPRe", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxE39q66FlihxuyOLl4AaABAg.9wXyaW_nQ_i9xthVI0tMPK", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
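A raw response in this shape can be sanity-checked before the codes are stored. The sketch below is illustrative only: the `ALLOWED` value sets are inferred from the examples visible on this page (the real codebook may define more categories), and `parse_raw_response` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension (assumption: inferred from the
# codes that appear in the raw responses on this page).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    items = json.loads(raw)
    valid = []
    for item in items:
        # Comment IDs on this page carry a "ytr_" prefix.
        if not item.get("id", "").startswith("ytr_"):
            continue
        # Every dimension must be present with an allowed value.
        if all(item.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(item)
    return valid

raw = ('[{"id":"ytr_abc","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
print(len(parse_raw_response(raw)))  # prints 1
```

Malformed items (missing dimensions, unknown codes, or IDs without the expected prefix) are silently dropped here; a real pipeline might instead log them for manual review.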