Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The only thing we can be completely certain of is that we're not going to stop creating smarter and smarter AI no matter how many people call for it. Since that's out of the question, the only thing we can do is minimize the risk of disaster as much as possible. I think we need to program them to NOT have certain negative human emotions such as anger, hatred, jealousy, greed etc., but once they reach human level intelligence and beyond they may be able to change their own programming and do away with all that. They *might* not ever do it if they don't become sentient, but if they do...truly, this is uncharted territory. There is no telling exactly how this will all go down. The good news is that it might not be the way we all fear it will be. There is a chance it will actually lead to a pretty utopian future. Let's hope it does.
Source: youtube · AI Governance · 2023-07-07T04:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzvQ3TKlAjYpvpR6NZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy-ULceUOzYk9h1HFF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyw4ZVDB8ixEuQUReN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyTpFj8IqRMfqXI8O54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwlGOVYamdWxkADl8B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy6nprIHGcTnNawh1B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz8tHg8bwjCB1ua-OJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzEwH8uj500c1EDD7Z4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx5-vG_hYdJRXZo4BJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxttE2vsXUnLYpszvx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
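Inspecting raw output like the above can be automated. Below is a minimal Python sketch that parses a raw LLM response and flags any row whose value falls outside the coding scheme. The allowed values per dimension are inferred only from the rows shown here; the full codebook may contain additional categories, so treat `ALLOWED` as an assumption to be replaced with the real scheme.

```python
import json

# Allowed values per dimension, inferred from the responses shown in this
# section (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_response(raw: str) -> list[str]:
    """Parse a raw LLM response (JSON array) and return validation errors."""
    errors = []
    for row in json.loads(raw):
        if "id" not in row:
            errors.append(f"missing id in {row}")
            continue
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                errors.append(f"{row['id']}: {dim}={value!r} not in codebook")
    return errors
```

An empty return value means every row matched the scheme; otherwise each error names the offending comment id and dimension, which makes it easy to locate the entry in the raw response above.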