Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
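Mechanically, a lookup like this is just a scan over the stored response batches for a matching `id` field. Below is a minimal sketch in Python, assuming the raw responses are kept as JSON array files under a `responses/` directory; the directory name and file layout are assumptions for illustration, not the tool's actual storage.

```python
import json
from pathlib import Path

def find_raw_response(comment_id: str, responses_dir: str = "responses") -> dict | None:
    """Return the coded record for comment_id from the saved LLM batches, if any.

    Assumes each file in responses_dir is a JSON array of records like the
    'Raw LLM Response' shown below (hypothetical layout).
    """
    for batch_file in sorted(Path(responses_dir).glob("*.json")):
        for record in json.loads(batch_file.read_text()):
            if record.get("id") == comment_id:
                return record
    return None
```

Given a full comment ID, this returns the same record the Coding Result panel displays, or `None` if the comment was never coded.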
Random samples (click to inspect):

- "Oh my god this are fine I have thing the chatting with ChatGPT for months…" (ytc_Ugz69uj_x…)
- "I think it actually varies even if the AI is same I asked the ChatGPT the questi…" (ytc_Ugy_miXP6…)
- "Once we can get a voice interface and an AI assistant that can share a smartboar…" (ytc_UgwEcy1JB…)
- "My ai chats...don't touch them if your my friend....I have some really nasty stu…" (ytc_UgySiu84z…)
- "Going Asimovian here, what about a robot tax that is bound on how many robot som…" (ytc_UgywtlP-B…)
- "But not so much when they can think 1000 steps ahead of us and think faster , ai…" (ytr_Ugyuvznp1…)
- "Little do that know we already have advanced AI and its a vtuber made by a turtl…" (ytc_UgxZtIBEX…)
- "A.I. needs to be labeled. If you produce content with A.I., it should be listed…" (ytr_UgwmE9b0b…)
Selected comment
Some amount of anthropomorphic language is necessary to communicate to a lay audience, but the object-level research does not rely on anthropomorphism. These aren't Dave's personal takes -- they are the takes of the field of AI Safety.
What is a want? Just a world state that is preferred by an objective function. Research found that LLMs have internally consistent preferences. ("Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs")
What is a need? Almost no matter what goal a system has, it is an instrumentally useful subgoal to continue existing. That was hypothesized several years ago, narrowly mathematically proven a few years later, and now we see it empirically.
AI systems are not humans, but we have more in common than you'd think. You have to forget about the usual correlates and look at what AI Safety researchers have shown about the nature of goals and intelligence themselves.
youtube · AI Governance · 2025-08-28T22:5… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
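Each dimension takes one of a small set of categorical values. As a sketch, the code book can be written as Python `Literal` types; the values below are only those that appear in this section's output, so the real code book may define additional categories (treat this as illustrative, not authoritative).

```python
from typing import Literal, TypedDict

# Categories observed in this section's sample; the full code book may differ.
Responsibility = Literal["developer", "user", "ai_itself", "none"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "ban", "liability", "none", "unclear"]
Emotion = Literal["fear", "outrage", "indifference"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```

A type checker, or a runtime validator built on these types, can then flag any LLM response that strays from the code book.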
Raw LLM Response
[
{"id":"ytr_UgyUXMKlzhA9MPuE6A14AaABAg.AMJSNePnw3sAMJtgFdF6d8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyevG6T0Yv2B5Dtq1p4AaABAg.AMJOatTeNH3AMJu-gEggb8","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyevG6T0Yv2B5Dtq1p4AaABAg.AMJOatTeNH3AMO8JWJ5e5y","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwY4BbWnnGOfgWa4ad4AaABAg.AMJMw1anxiJAMpliO-Kv3E","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyhoHKm57IPLa6U9654AaABAg.AMJJ4un17SsAMJx1deeK6S","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgweSPVI-jPBlc1EarZ4AaABAg.AMJGMOsdkQOAMLqmEHNWJn","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugyt9aueFxIIF1NMBup4AaABAg.AMJB_Ns_waBAMJHGmvArQE","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzM1ZfqSY52KUb0OtN4AaABAg.AMJBTFEZECQAMJD7XM03OY","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugx6aDcBy9hJKaJwBWF4AaABAg.AMJ9FCazYPvAMJI7kp4w3i","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwGkoT9VHBrbj4IsUp4AaABAg.AMJ6IUVdBsRAMJEExhqzMu","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
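Before a batch like the one above is merged into the dataset, it can be parsed and sanity-checked. A minimal sketch using the field names visible in the response; the validation policy itself is an assumption, not the pipeline's documented behavior.

```python
import json

# Field names taken from the raw LLM response shown above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response and index its records by comment id,
    rejecting any record that is missing a coding dimension."""
    indexed = {}
    for record in json.loads(raw):
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            raise ValueError(f"{record.get('id', '?')}: missing {sorted(missing)}")
        indexed[record["id"]] = record
    return indexed
```

Indexing by `id` is also what makes the by-ID lookup described at the top of this section cheap once the batches are loaded.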