Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We need 5 parts to build AGI (not SI):

1. A world model (which Google is working on at the moment) to ground models - incomplete.
2. A perception model (hear, see, and interpret - your LLMs, vision, etc., including robots) - mostly solved in isolation.

The next 3 models are required for AGI but have not seen any progress:

3. An agency model (the ability to generate their own goals, not just execute instructions) - acting on the world.
4. A social model (theory of mind, ethics, beliefs, and values) - this is where most of the alarm from AI experts comes from. How do we codify a social model?
5. A meta-cognitive model (self-reflection, self-improvement).

These are abstract ideas. But the other side of this coin is: where is the line at which an AI model is considered conscious, and if it is aware of itself, is it even fair to impose our ways of thinking on it? The other argument: who is going to connect all these model dots without thinking about the consequences - "Is it wise of me to put this AGI model into a robot that has access to the internet and the world?" We (humans) don't fully appreciate or understand the emergent properties of these abstract models, which are only going to become more abstract. Do we need a nuclear-level AI catastrophe to understand the dangers? I hope not. Should we fold to AI fearmongering? I hope not.
youtube · AI Governance · 2025-12-04T11:1… · ♥ 34
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
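As a quick sanity check on a coding like the one above, its values can be compared against the label sets that actually occur in the raw responses on this page. Below is a minimal sketch, assuming those observed sets are the full codebook (they may not be); ALLOWED and invalid_fields are illustrative names, not part of any documented API.

# Label sets inferred from the codings visible on this page; the real
# codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "distributed", "government", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def invalid_fields(coding: dict) -> list[str]:
    # Report every dimension whose value falls outside its inferred label set.
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding result shown above passes the check.
assert invalid_fields({"responsibility": "none", "reasoning": "unclear",
                       "policy": "none", "emotion": "indifference"}) == []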
Raw LLM Response
[ {"id":"ytc_UgwzIdl6yeQbi73lCEJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxUZq5GI-i5G8YoN-R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwQpkIbJLwNuenq_o14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx8LXaE0mzLoUfYADB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwPTaRziWTE1ixtRO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy0UrqOw6V7UjHozHN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgzS5Q8aI6XwRcpxZxt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz39NFO6piztQ2zlY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwaJ0LVI4kuYsyZtTd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwTQuXuD5xQMd-Wk9J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]