Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Considering brains themselves as complex biological computers suggests that future AI could indeed achieve 'real' consciousness, even if we're still grappling with what consciousness truly means. Just as the essence of our cognition emerges from the intricate interactions within our biological 'hardware', so too could the advanced computational processes of AI lead to new forms of consciousness, beyond mere simulations or number crunching. If it feels like a rubber duck, smells, sounds, and tastes like one, it's probably a rubber duck. Anyway... i think this video is totally overselling current advancements.
youtube AI Governance 2024-03-23T03:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgzCygy9T1n_64iOkgR4AaABAg.A1vH391OrhiA4yOt3Wa53S","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugy0O-CQLXFpU5fOhaN4AaABAg.A1oHMIzGxRQA1tWDetlVZp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgywFGOAx4dZTOYC_d94AaABAg.A1WPgARhle5A2pLZk6LMyl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgywFGOAx4dZTOYC_d94AaABAg.A1WPgARhle5A42jBf9jC9s","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgywFGOAx4dZTOYC_d94AaABAg.A1WPgARhle5A4LJjkXSMap","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytr_Ugx9VGpxv54eN6E5exx4AaABAg.A1NaaAgWZIEA3hm2WnZGx9","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugw5nuo8zQ-DMwgHoDB4AaABAg.A1HxJbuNxWYA2Da1T_6fFh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzRe7QQ09ihLK-Fe7F4AaABAg.A1AZGAvi912A1AZuQ4MjAK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_Ugx4O0-5ToQM_UVDQwp4AaABAg.A15xBsGOCpOA1JKVQeWDaq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugxj208pvJqbA-5wbVd4AaABAg.A0PwGwSS1J_A0bUeSgGWNl","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
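A raw response in this shape can be turned into a lookup from comment id to its coded dimensions. The sketch below is a minimal illustration, not part of the coding pipeline; it assumes the response is a valid JSON array of objects with "id", "responsibility", "reasoning", "policy", and "emotion" keys (the shortened id "ytr_example" is a hypothetical placeholder):

```python
import json

# Hypothetical raw LLM response, in the same shape as the one shown above.
# The id is a placeholder, not a real comment id.
raw = (
    '[{"id":"ytr_example","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)

# Build a lookup table: comment id -> coded dimensions.
codes = {row["id"]: row for row in json.loads(raw)}

print(codes["ytr_example"]["emotion"])  # indifference
```

Keying by id makes it straightforward to join each coding result back to the original comment record when inspecting individual outputs, as this page does.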