Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@AndyHeynderickx For me, Roman exhibited an attitude of pessimism on the back of a few general and vague premises, but offered little in the way of meaningful argument -- at least toward any interesting, novel, or existentially concerning point. Not, at least, in this interview. I agree that AI poses risks -- perhaps even existential risks -- to humanity. Some trivial ones were mentioned by Roman (such as "i-risk"). However, the popular conceptual picture of AGI "turning against" humanity (implied throughout the interview) requires many steps of logic no matter which hypothetical causal path you run down, among them (very frequently, and especially here in Roman's case) the assumption that you somehow get pernicious artificial intentionality from an evolution of artificial intelligence. This step in the logic is typically overlooked, and it was missed again here. It's hardly obvious that any amount of built-up intelligence in AI/AGI will lead to the emergence of artificial intentionality. What seems more likely is that this will not happen. Intelligence, intentionality, and consciousness are often conflated. Roman doesn't seem to consider any of these nuances at all, nor the many more associated ones.
youtube 2024-09-16T02:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytr_Ugzrd39tEyjjXSkYrPl4AaABAg.AD59lECMKJtAD5AK5yeFQt","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_Ugw_oUTPvkZvUAZMXTZ4AaABAg.A9fxu00uBWzA9fyIl5WhNs","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_Ugy7qB6VjNJiwEdJP1h4AaABAg.A8jHrAKTHp1A9wmlpSsNII","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugwpody1QhIuO8bFVE94AaABAg.A4f7IQxEqQoA8QUsZb7BNG","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_Ugwpody1QhIuO8bFVE94AaABAg.A4f7IQxEqQoA8R1E0D7qbR","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytr_UgxPQTWkwALaEzc6GuZ4AaABAg.A4bcrwKd-8ZA4k5qWB_Ceq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgxPQTWkwALaEzc6GuZ4AaABAg.A4bcrwKd-8ZA4lTlCFcS_P","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytr_UgwHgKmDvhYLMl7-Dnp4AaABAg.A4TsNZ1PLy2A4TxelVLyJs","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytr_Ugxj2bYDcZz5H-KZP2d4AaABAg.A4TnkaIE9F4A5P_XAoclgl","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytr_Ugxj2bYDcZz5H-KZP2d4AaABAg.A4TnkaIE9F4A60JrAS0VT9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]