Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Yeah, and it was a coin flip whether we survived.
This time, we're building nuk…
ytr_UgzrOrdOn…
I was devastated when I found out that picture was ai, bc is gorgeous, but I’m …
ytc_Ugz_lRUxy…
I even myself who tried to use her fone the less i could PC the same for everyth…
ytc_UgxOmKAza…
Man.. Year 2017 - when I started writing my Bachelor's Thesis aimed on the Safet…
ytc_Ugw91Ue4D…
I'm honestly scared if i can achieve my dream BEFORE Ai does it before me, look,…
ytc_Ugx_EMT4W…
I’m still encouraged by the life saving, cancer curing type benefits of AI in th…
ytc_UgxaXwAeH…
Literally the algorithm will show me this video and then the next video that pop…
ytc_Ugx2lAw3I…
I’m honestly not worried at all. The smarter the AI is the more complicated pro…
ytc_Ugw6INuVQ…
Comment
I got it right, though with a slightly different result (1/51), and frankly we can make the answer more precise by understanding the impact of the device's algorithm:
- Take 1000 people, all real negatives: you get just the 50 false positives at the start. Then convert 1 real negative into a real positive and you should still have 50 false positives + 1 real positive, so at first glance the answer 1/51 seems valid (1/(1+1000×5%) = 1.96078431%).
In fact, we can refine it further: does the 5% false-positive rate apply to the number of tries or to the number of real negatives?
- If it applies to the number of tries, then my answer above is correct.
- If it applies to the number of real negatives, then it should be 1/(1+999×5%) = 1.96270854%.
- If we don't know which, we can treat it as a 50/50 chance between the two and compute 0.5×(1/51) + 0.5×(1/(1+999×0.05)) = 1.96174643%.
Finally, we need to acknowledge that we don't know the algorithm well. So far I have assumed the real positive is NEVER part of the 5%; if it might sometimes be (the video assumes it ALWAYS is), my first answer, 1/51, should be corrected to 1/((1−1×5%)+50), which is also 1.96270854%. In that case there is no more difference whether the rate is based on the number of tries or the number of real negatives, so the final answer is 1.96270854%.
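The commenter's arithmetic can be checked with a short Python sketch (a hypothetical illustration, assuming the scenario implied above: a cohort of 1000 people containing 1 real positive, a 5% false-positive rate, and real positives always detected):

```python
def p_real_given_positive(n_people=1000, fp_rate=0.05, rate_base="tries"):
    """P(actually positive | tested positive) under two readings of the 5%."""
    true_pos = 1  # one real positive in the cohort
    if rate_base == "tries":
        # 5% of all 1000 tests come back as false positives
        false_pos = n_people * fp_rate
    else:
        # 5% of the 999 real negatives come back as false positives
        false_pos = (n_people - true_pos) * fp_rate
    return true_pos / (true_pos + false_pos)

print(f"{p_real_given_positive(rate_base='tries') * 100:.8f}%")      # 1.96078431%
print(f"{p_real_given_positive(rate_base='negatives') * 100:.8f}%")  # 1.96270854%
```

Both readings of the 5% agree to within about 0.002 percentage points, which is why the commenter's candidate answers are so close.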
youtube
2026-04-03T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz1YBFXMDyrmvvejUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwPS1hPvyMyM0HYdBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNgpsXVmL9Vbpk9uV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyP5fFMcQfwd1vCBbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMyvm54nMlCWTg0ft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrM96f9GKnUM-S8VZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwE2bYS54-Z-nz4iGF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw1X-LcwPD9zeHbNkZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwkw1dO0J-tSGZVJ3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNn-7yMfBAW5nulcp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"amusement"}
]
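Since the raw response is a plain JSON array of per-comment codes, tallying any dimension takes only the standard library. A minimal sketch (the two IDs below are made up for illustration; the schema matches the array above):

```python
import json
from collections import Counter

# Hypothetical two-row sample in the same schema as the raw LLM response.
raw = """[
 {"id": "ytc_example1", "responsibility": "none", "reasoning": "consequentialist",
  "policy": "none", "emotion": "indifference"},
 {"id": "ytc_example2", "responsibility": "government", "reasoning": "deontological",
  "policy": "regulate", "emotion": "outrage"}
]"""

codes = json.loads(raw)
emotions = Counter(row["emotion"] for row in codes)
print(emotions)  # Counter({'indifference': 1, 'outrage': 1})
```

The same pattern works for the `responsibility`, `reasoning`, and `policy` fields.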