Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is pretty much right-on from what I've gathered. How could an AI decide intelligently on weights and tokens, etc. for creating an AI model of *its* own? LLM's are just massive amounts of data with what's essentially search and output functions. In cases where it seems spooky and intelligent to us humans, that's because, well, it's literally been designed and "trained" to do exactly that - spit out linguistic output that's impressive to humans specifically. You could just as easily optimize AI models for anything else. Plus it still fails at pretty much anything abstract, niche, novel, or continuity-sensitive, but we just pretend that the good output that happens most of the time is significant of a brain. It's like we're so enamored by our creation, we forget that we're actually just marveling at our own cleverness and sum of existing knowledge.
Source: youtube · AI Governance · 2025-08-29T06:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugxv_UEXyhKZ7R9Xii94AaABAg.AMKsrtpfL1uAMOKkSTfPE6", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugxv_UEXyhKZ7R9Xii94AaABAg.AMKsrtpfL1uAMOTaSOzzuY", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyyTU_-ZLDYNN5NIIJ4AaABAg.AMKs3PMvMIOAMOz2scifbH", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugz_Q9TMlGfz7tOvZXt4AaABAg.AMKoSxUbyD9AMMkMIbVd0g", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugz_Q9TMlGfz7tOvZXt4AaABAg.AMKoSxUbyD9AMNi3PAx9lx", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgyCYeW-0dcc1esbUXp4AaABAg.AMKmVzfKNtXAMPlr2fr_tr", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwQ2Cu0lTRVOesepF94AaABAg.AMKcaXEYhl0AMO6ApBQKrr", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwMZ3q4QX4DVeRWs2B4AaABAg.AMKVx5Hq5TiAMKqNkEMtjV", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwMZ3q4QX4DVeRWs2B4AaABAg.AMKVx5Hq5TiAMKtzCj0wfH", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgwMZ3q4QX4DVeRWs2B4AaABAg.AMKVx5Hq5TiAMNvCNIRrne", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
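A minimal sketch of how a coding like the table above could be recovered from the raw response: the LLM returns a JSON array of per-comment records, so one can parse it and index by comment `id`. This assumes only the structure visible in the response shown here; the truncated example uses the third record, which matches the coded dimensions above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (one record shown, structure as in the full response above).
raw = '''[
  {"id": "ytr_UgyyTU_-ZLDYNN5NIIJ4AaABAg.AMKs3PMvMIOAMOz2scifbH",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''

# Index the records by comment id for lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for one comment.
coding = codings["ytr_UgyyTU_-ZLDYNN5NIIJ4AaABAg.AMKs3PMvMIOAMOz2scifbH"]
print(coding["responsibility"], coding["emotion"])  # none indifference
```

In practice the raw string would come from the model API response rather than a literal, and a lookup miss (an id the model skipped or garbled) would need handling before writing the result table.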