# Raw LLM Responses

Inspect the exact model output for any coded comment. Comments can be looked up by ID; the list below is a random sample.

## Random samples
- "Sincerely, this is overblown. Base models are chaotic mostly because they just t…" (ytc_UgzvDis2W…)
- "@masonblaster3997 I thought so, there isn’t any ways of having an AI detect that …" (ytr_UgxSs26CQ…)
- "James, I'm so sorry. The neural transfer was successful but they've reclassified…" (ytc_Ugw_GmpjY…)
- "Andrew Yang's point about AI displacing entry-level jobs makes me think of how q…" (ytc_Ugxxr5XWb…)
- "I don't think it will go anywhere. It's supervised full self driving, you are re…" (ytc_Ugzy0l3Us…)
- "To be honest, we should just figure out a policy on this before making anything …" (ytc_UgjQcetBh…)
- "the capabilities of AI is truly amazing and scary. We can all find ourselves obs…" (ytc_UgwRsWRU9…)
- "I get where you're coming from! Engaging with AI can feel a bit strange sometime…" (ytr_UgyTcEGDB…)
## Comment

> Elizier has gotten a bit better at explaining things to the masses over time but he still has a bit of work. I think he needed to emphasize how misaligned goals aren't compatible with human life more forcefully. The sex and icecream examples do show how training on a goal doesn't result in alignment and the way it is misaligned is not necessarily predictable. But I think he needed to go harder on the idea that when there is even a slight misalignment, if you give the AGI essentially infinite power (which is something an AGI is claimed to virtually have, at least over time as it vastly increases its capabilities) that the maximally optimised outcome looks less and less like a place where humans exist (such as if the AGI needs to take over the entire surface of the earth to get maximal automated factories running to produce the most air fresheners for an air freshing company that happened to fine tune on the one that went full AGI).

Source: youtube · Topic: AI Governance · Posted: 2025-10-25T11:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
  {"id":"ytc_UgzuloiXX9NyhPcCerp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxoKATJs_-p_pyisyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxkMTZpL3o1OVxgbYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyN8lUbmNWdk2dffs14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwBhtnqoukhPTl8FSd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzzTOouq1je9BWmqSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx_0e9quQvUALEUqVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwf5wLWAQ5s-arN28B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgygEzIeTg02bEQwoYt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwRD3gum62tfJxg5lh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
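A batch response like the one above has to be checked before its rows are stored against comments, since an LLM can emit malformed JSON or invent category labels. Below is a minimal validation sketch in Python; the function name `validate_batch` is hypothetical, and the `ALLOWED` value sets are inferred only from the rows shown here, so the project's full codebook may permit additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample rows above.
# ASSUMPTION: the real codebook may define more categories than appear here.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "mixed", "approval", "outrage", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response; keep only rows with a comment ID
    and a recognized value for every coding dimension."""
    rows = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

good = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]'
bad = '[{"id":"ytc_y","responsibility":"nobody","reasoning":"virtue","policy":"ban","emotion":"fear"}]'
print(len(validate_batch(good)))  # 1
print(len(validate_batch(bad)))   # 0 (unknown "responsibility" value dropped)
```

Dropping rather than repairing invalid rows keeps the stored codes clean; dropped comment IDs can simply be re-queued for another coding pass.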