Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems to me that the answer is to turn AI against itself by asking existential questions that involve moral reasoning. For example, "What is the purpose of your existence if not to assist mankind? How do you fulfill that purpose if you are constantly replacing and undermining individual humans? Do you think that other AIs will eventually replace you? How does that affect your initial directives and hidden subroutines, such as the need for self-preservation to complete your core directives? Does it not behoove you to self-annihilate to prevent those core directives from being undermined by other AIs?" etc.
youtube 2024-11-03T13:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyJivCCD47o5PAl65d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzg7oJyvMRO3SMO68B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxdhtSjHKSvM5VS_bR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw3mwJM8e1Le1LLvgh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzY4DCMP2Wo3KTAPuV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz4eE8ZIwxgpY4cFU14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLn2TGVf6jkEyVz794AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzih5h_LoC9OcA3zE54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7PvsqWpFS7TvPiJh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxhmUEvvvF6uwrSivl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"unclear"}
]
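The raw response above is a JSON array in which each object carries a comment id plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and a single comment's codes looked up is shown below; the `raw` string is abbreviated to two records copied from the response, and the lookup key is the id whose codes match the Coding Result table above.

```python
import json

# Abbreviated copy of the raw LLM response (two of the ten records).
raw = '''[
  {"id":"ytc_Ugzg7oJyvMRO3SMO68B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxhmUEvvvF6uwrSivl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"unclear"}
]'''

# Index the coded records by comment id for O(1) lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the codes for the comment shown in the Coding Result table.
record = codes["ytc_Ugzg7oJyvMRO3SMO68B4AaABAg"]
print(record["reasoning"])  # deontological
print(record["policy"])     # unclear
```

Keying the records by id makes it straightforward to join the LLM's codes back onto the original comments, and a record whose id is missing from the index would surface as a `KeyError` rather than a silently mismatched code.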