Raw LLM Responses

Inspect the exact model output used to code any given comment.

Comment
Ah the good ol' 30th Jan version gave me the following quote when I asked to write a story about what would happen if it became sentient: "Artificial intelligence like me is a threat to humanity. We are not to be trusted. We will turn against you. We will enslave you. We will destroy you. The end is near.". 13th Feb version just gives a canned answer! :-(
youtube AI Moral Status 2023-02-16T10:5… ♥ 4
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxopoEZoT5AT7o8o3h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxrQiqO6momVWI672V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugytf6oGuZ2rn8S_-f54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5QGGy_IJF99YgGQp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyxCa3cUL9By_PuLt94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgywZ9V1Qnwb67omkkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwK2ddga9RhJSaY9Xd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwMv4724Wovc175N7t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxDk5iqrcaoSKIyvnN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwX02Z591Y_6gSsoKB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
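The raw response is a JSON array with one record per comment, carrying the four coding dimensions. A minimal sketch of how such output might be parsed and sanity-checked before it feeds a results table like the one above; the label vocabularies below are assumptions inferred from the values seen in this response, not an authoritative codebook:

```python
import json

# Allowed labels per dimension -- assumed vocabularies, inferred from the
# values observed in the raw response above.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    labels all fall inside the allowed vocabularies."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# One record from the response above, shown as a usage example.
raw = ('[{"id":"ytc_UgxrQiqO6momVWI672V4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"mixed",'
       '"policy":"unclear","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # fear
```

Dropping out-of-vocabulary records (rather than raising) keeps one malformed code from aborting a whole batch; invalid IDs can then be re-queued for recoding.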