Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
It’s another incredibly shortsighted plan though. Say that the AI answer is awes…
rdc_nuflr1p
Did you notice the number of Tech Company Execs that were present at Pres. Trump…
ytc_UgxXu5qOL…
Very Odd, that next to Tucker hangs a Picture of "Fatima" Our Lady of Fatima” is…
ytc_UgzCT7Xzs…
@boltaurelius376 I asked it trivial questions, and got trivial responses, exact…
ytr_Ugwh6frJ4…
This is what I have named the "snake eating its own tail" theory. Capitalist bus…
ytc_UgyPEVbll…
i can assure you: the difference in quality between AI writing and good human wr…
ytr_UgwiSgV0W…
I don’t think AI has a racism problem, I think it has a getting to handsie with …
ytc_UgyLx_pHh…
Dauerwerbesendungen - is how we call this type of content in German. The best an…
ytc_UgxKmHUMy…
Comment
We need 5 parts to build AGI (not SI).
1. A world model (which google is working on at the moment) to ground models - incomplete
2. A perception model (hear, see and interpret - your LLMs, vision etc including robots) - Mostly solved in isolation
The next 3 models are required for AGI but have not had any progress:
3. Agency model (The ability to generate their own goals, not just execute instructions) - Acting on the world.
4. Social model (theory of mind, ethics, beliefs and values) - This is where most of the alarm from AI experts comes from. How do we codify a social model?
5. Meta-cognitive model (self reflection, self improvement)
These are abstract ideas.
But the other side of this coin is: where is the line at which an AI model is considered conscious, and if it is aware of itself, is it even fair to impose our ways of thinking on it?
The other argument: who is going to connect all these model dots without thinking about the consequences? "Is it wise of me to put this AGI model into a robot that has access to the internet and the world?"
We (humans) don't fully appreciate or understand the emergent properties of these abstract models, which are only going to become more abstract.
Do we need a nuclear-level AI catastrophe to understand the dangers? I hope not. Should we fold to AI fearmongering? I hope not.
youtube
AI Governance
2025-12-04T11:1…
♥ 34
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
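The coding-result table maps directly onto a small record type. Here is a minimal sketch in Python; the field names mirror the table's Dimension column, and the example value sets in the comments are assumptions inferred from the raw responses shown on this page, not a published codebook.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One coded comment, as shown in the Coding Result table."""
    responsibility: str  # e.g. "none", "user", "company", "government", "distributed"
    reasoning: str       # e.g. "unclear", "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "none", "regulate", "ban", "liability"
    emotion: str         # e.g. "indifference", "fear", "outrage", "approval", "mixed"
    coded_at: str        # ISO-8601 timestamp of when the code was assigned

# The record from the table above:
result = CodingResult(
    responsibility="none",
    reasoning="unclear",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-26T23:09:12.988011",
)
print(result.emotion)  # indifference
```

A frozen dataclass keeps coded records immutable once stored, which is a reasonable default for annotation data.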
Raw LLM Response
[
{"id":"ytc_UgwzIdl6yeQbi73lCEJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxUZq5GI-i5G8YoN-R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwQpkIbJLwNuenq_o14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8LXaE0mzLoUfYADB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwPTaRziWTE1ixtRO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy0UrqOw6V7UjHozHN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzS5Q8aI6XwRcpxZxt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz39NFO6piztQ2zlY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwaJ0LVI4kuYsyZtTd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTQuXuD5xQMd-Wk9J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
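A raw response like the one above can be parsed and spot-checked before its codes are stored, which also supports the "look up by comment ID" feature. A minimal sketch, assuming the response is a JSON array of objects with exactly these five keys; the allowed value sets are inferred from the samples on this page, and the comment ID in the usage line is a placeholder, not a real ID:

```python
import json

# Allowed values inferred from the sample responses above;
# an actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "company", "government", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the codes by comment ID.

    Raises ValueError on a malformed entry, so bad model output is
    caught before it reaches storage.
    """
    by_id = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            raise ValueError(f"entry without id: {entry!r}")
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {entry.get(dim)!r}")
        by_id[cid] = {dim: entry[dim] for dim in ALLOWED}
    return by_id

# Placeholder ID for illustration only:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # mixed
```

Indexing by ID makes the "look up by comment ID" search a plain dictionary access; rejecting off-schema values early avoids silently storing hallucinated categories.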