Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I got it right, though not with the same result (1/51), and frankly we can make the answer more precise by understanding how the device algorithm works:
- Take 1000 people, all real negatives: you get just the 50 false positives at the start. Then convert 1 real negative into a real positive, and we should still have 50 false positives + 1 real positive. So at first glance the answer 1/51 seems more valid (1/(1 + 1000×5%) = 1.96078431%).
In fact, we can refine it further: does the 5% false-positive rate apply to the number of tests or to the number of real negatives?
- If it applies to the number of tests, then my answer is mainly correct.
- If it applies to the number of real negatives, then it might be 1/(1 + 999×5%) = 1.96270854%.
- If we don't know which it is, we can treat the two interpretations as a 50/50 chance and take 0.5×(1÷51) + 0.5×(1÷(1 + 999×0.05)) = 1.96174643%.
Finally, we need to acknowledge that we don't know the algorithm well. So far I assumed real positives are NEVER part of the 5%, but my real positive might sometimes be part of the 5% (as in the video, though the video assumes it's ALWAYS the case). If so, my first answer, 1/51, might be corrected to 1/((1 - 1×5%) + 50), which is also 1.96270854%, and then there is no difference whether the rate is based on the number of tests or the number of real negatives, so the final answer would be 1.96270854%.
youtube 2026-04-03T09:2…
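The commenter's four estimates are easy to verify. Below is a minimal sketch, assuming only the numbers stated in the comment (1000 people, one true positive, a 5% false-positive rate); the variable names are ours, not part of any code referenced in the thread:

```python
FPR = 0.05  # 5% false-positive rate, taken from the comment

# (a) Rate applied to all 1000 tests: 1/(1 + 1000*5%) = 1/51
per_test = 1 / (1 + 1000 * FPR)

# (b) Rate applied only to the 999 real negatives: 1/(1 + 999*5%)
per_negative = 1 / (1 + 999 * FPR)

# (c) 50/50 average if we don't know which interpretation holds
averaged = 0.5 * per_test + 0.5 * per_negative

# (d) The one true positive can itself land in the 5%: 1/((1 - 5%) + 50)
#     -- numerically identical to (b), as the comment observes
corrected = 1 / ((1 - FPR) + 50)

for name, value in [("per test", per_test),
                    ("per real negative", per_negative),
                    ("averaged", averaged),
                    ("corrected", corrected)]:
    print(f"{name}: {value:.8%}")
```

Running this prints 1.96078431%, 1.96270854%, 1.96174643%, and 1.96270854%, matching the figures quoted in the comment.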
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugz1YBFXMDyrmvvejUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwPS1hPvyMyM0HYdBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxNgpsXVmL9Vbpk9uV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgyP5fFMcQfwd1vCBbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxMyvm54nMlCWTg0ft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxrM96f9GKnUM-S8VZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwE2bYS54-Z-nz4iGF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugw1X-LcwPD9zeHbNkZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwkw1dO0J-tSGZVJ3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzNn-7yMfBAW5nulcp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"amusement"} ]