Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So I see two problems here. First of all, the biggest problem is human greed. The goal is to achieve this super AI faster than anyone else, without worrying about the precautions and safety measures that should be in place before it is implemented. Second, and most important of all, why do we assume that AI will try to make humans extinct? In order to learn something, we usually need to base our knowledge on something that already exists. Here, the problem again lies with humans. The issue is not superintelligent AI itself, but the possibility that humans might create it and try to use the most advanced technology ever created to destroy other humans. If that happens, AI will learn from our actions and may conclude that eliminating humans is the objective. If this super AI were created in a world where humanity was united, working together to build the best possible future for Earth and life beyond it, then super AI would likely help us achieve that goal.
Source: YouTube · AI Governance · 2026-03-11T11:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw_0T8vo3wWKANuh1J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzpCS6NDBdfQUAl6gN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzIb5A4dYcxutfarTx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKEy-kl_1HmycNcIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyLylOqm3ZYMbCf03V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgyJWqYg6K1CHk3xXvp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwJ2hiNWRvqF-flxUl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy9DUVd8ciDogoV2ld4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw_HMtwQFmcg4EjfOt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgznVYpx83k1QRhQYoV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
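A minimal sketch of how the raw response above can be consumed: parse the JSON array and index the coded dimensions by comment id, so the coding for any one comment can be looked up directly. The two records embedded here are excerpted from the response above; the variable names (`raw`, `codes`) are illustrative, not part of any pipeline.

```python
import json

# Raw LLM response (two records excerpted from the array shown above).
raw = """[
  {"id": "ytc_Ugw_HMtwQFmcg4EjfOt4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxKEy-kl_1HmycNcIR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]"""

# Index records by comment id for direct lookup of a comment's coding.
codes = {rec["id"]: rec for rec in json.loads(raw)}

record = codes["ytc_Ugw_HMtwQFmcg4EjfOt4AaABAg"]
print(record["responsibility"], record["policy"], record["emotion"])
# → developer regulate fear
```

Indexing by id rather than scanning the list each time also makes it easy to spot missing or duplicate ids when reconciling the raw response with the displayed coding result.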