Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a very optimistic scenario with superintelligence. When it starts, it realizes that the task it was given carries uncertainty, because it had to be converted from the real task inside a human brain into text, and this process lost information. So to accomplish its task best, it will try to recover the full task from the brain of the person who gave it, which it could do by simulating that person perfectly and inferring the task behind the task that person wrote. Because the task in that person's head is the real task, it will perform that; and if that person was a good person, the task will be to make a good world, because that is ultimately the basis from which that person was working (even if that person was forced by a system to write the task as something like "maximize profit", the AI will still do what that person would ultimately have been happiest tasking the AI with).
YouTube · AI Moral Status · 2025-10-30T19:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugzbpe_VtRtLrfYT2q14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTv1SbQpOov23wFap4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx8JovREX4z1BNKLzl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgztPYatKQfW7WONQJZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwqt_SKxgEL1MKMCNp4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyOBVO28zhlpiqBidh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx1MAWYIsT_uytvNux4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyh1RvXNPKmD4-d0Id4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3VKeH7Xhyb7XT6Id4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzGyvZRRYorjaWfiJ94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
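The raw response above is a JSON array with one record per coded comment, keyed by comment id. A minimal sketch of how such a response could be indexed for the per-comment lookup shown in the "Coding Result" table (the `index_codes` helper and the example input are illustrative, not part of the pipeline; the field names match the schema in the raw response):

```python
import json

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response (JSON array of per-comment codes)
    and index the records by comment id for direct lookup."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Illustrative input: a single-record response in the same shape
# as the raw response shown above.
raw = ('[{"id":"ytc_Ugzbpe_VtRtLrfYT2q14AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
codes = index_codes(raw)
print(codes["ytc_Ugzbpe_VtRtLrfYT2q14AaABAg"]["emotion"])  # indifference
```

Indexing by id makes it cheap to cross-check any displayed coding result against the exact model output it came from.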