Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Some amount of anthropomorphic language is necessary to communicate to a lay audience, but the object-level research does not rely on anthropomorphism. These aren't Dave's personal takes -- they are the takes of the field of AI Safety. What is a want? Just a world state that is preferred by an objective function. Research found that LLMs have internally consistent preferences. ("Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs") What is a need? Almost no matter what goal a system has, it is an instrumentally useful subgoal to continue existing. That was hypothesized several years ago, narrowly mathematically proven a few years later, and now we see it empirically. AI systems are not humans, but we have more in common than you'd think. You have to forget about the usual correlates and look at what AI Safety researchers have shown about the nature of goals and intelligence themselves.
youtube AI Governance 2025-08-28T22:5… ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgyUXMKlzhA9MPuE6A14AaABAg.AMJSNePnw3sAMJtgFdF6d8", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgyevG6T0Yv2B5Dtq1p4AaABAg.AMJOatTeNH3AMJu-gEggb8", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgyevG6T0Yv2B5Dtq1p4AaABAg.AMJOatTeNH3AMO8JWJ5e5y", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwY4BbWnnGOfgWa4ad4AaABAg.AMJMw1anxiJAMpliO-Kv3E", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyhoHKm57IPLa6U9654AaABAg.AMJJ4un17SsAMJx1deeK6S", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgweSPVI-jPBlc1EarZ4AaABAg.AMJGMOsdkQOAMLqmEHNWJn", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugyt9aueFxIIF1NMBup4AaABAg.AMJB_Ns_waBAMJHGmvArQE", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzM1ZfqSY52KUb0OtN4AaABAg.AMJBTFEZECQAMJD7XM03OY", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_Ugx6aDcBy9hJKaJwBWF4AaABAg.AMJ9FCazYPvAMJI7kp4w3i", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwGkoT9VHBrbj4IsUp4AaABAg.AMJ6IUVdBsRAMJEExhqzMu", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
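A raw response like the one above can be validated before it is accepted into the coding database. The sketch below is a minimal, hypothetical example of that step: it parses the JSON array and keeps only records whose codes fall within the allowed sets. The field names match the response shown here; the allowed value sets are inferred from the codes observed on this page and may be a subset of the project's full codebook.

```python
import json

# Allowed codes per dimension. These sets are assumptions inferred from the
# values visible in this page's raw response, not the project's full codebook.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"none", "unclear", "mixed", "consequentialist",
                  "deontological", "virtue"},
    "policy": {"none", "unclear", "liability", "regulate", "ban"},
    "emotion": {"indifference", "fear", "outrage"},
}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only valid records.

    A record is valid when every dimension in ALLOWED is present and
    its value belongs to that dimension's allowed set.
    """
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items())
    ]


# Usage with a shortened, hypothetical record id:
raw = ('[{"id":"ytr_example","responsibility":"user",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
print(parse_raw_response(raw))
```

Dropping (rather than repairing) invalid records keeps the downstream tables trustworthy: a record like the first one above would pass, while a hallucinated code such as `"emotion": "joy"` would be filtered out before display.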