Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is really impressive and could absolutely be a game changer for education. Unfortunately he's quite wrong when it comes to the last couple minutes of this talk. We can't wait for "when the problems arise" because, for AI, that is too late. None of the past is relevant to this new situation, because we've never had to deal with what is effectively alien entities with intelligence on par to or greater than our own. That makes this completely different than anything humans have faced before. If we let AIs get smarter than humans without dealing with all the safety aspects BEFORE THEN it will be too late. This is a situation where humanity can't wait until there are problems to try and fix them after the fact. I'd like to point out that all the positive thinking in the world regarding education won't matter - if there aren't humans to educate in the future. On our current path with attitudes like this (towards AI safety) where we race ahead building AIs that are well beyond what we can ensure safety for will lead to a horrible outcome for humanity. It doesn't need to be that way, but doing it safely would require patience and a focus on safety instead of rapid unchecked progress but too many people just don't take AI safety seriously and if this trend continues it has a high likelihood of leading to disaster.
YouTube · 2023-05-14T18:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
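A coding result like the one above is just four categorical dimensions plus a timestamp. Below is a minimal sketch of one way to model such a record in Python; the class name and field names mirror the table but are assumptions for illustration, not the tool's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: a categorical value per dimension plus a timestamp.

    Field names are illustrative; the actual pipeline schema may differ.
    """
    responsibility: str  # e.g. "distributed", "user", "none", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "fear", "approval", "outrage", "resignation"
    coded_at: datetime

# The coding result shown in the table above:
result = CodingResult(
    responsibility="distributed",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```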
Raw LLM Response
[ {"id":"ytc_Ugw4kbFgssihkDTulSB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyiWG6maJ1Sznc-ZUJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxuRXbhVHcSsP3tkXZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz5j1CQcvpVQLkz2sx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwHT4Z_mbaS1YaJTZl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugxh0_LIOP00AUXtSeh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy6Xu1TcN3ci2mHR6x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyBbmqDOWnQZLQbupp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxOQ_nHU4Y73_H5g3Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxdvQCl90ta411NzMZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"} ]