Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@UnseenUnheardUnknownPresent I thought this was just gonna be a case of The Velv…
ytr_Ugy_fgKIk…
The desire of power is just as simulated as feeling emotions is simulated, AI is…
ytc_UgyCnzMgA…
The fact it's not just meeeee😭 (I don't rlly talk to ai anymore bot ik THEY FREA…
ytc_Ugzrx9-6f…
The ruling class wont listen to the LLMs solutions, because it involves them not…
ytc_UgxgQLOVO…
Oh wow, so this is what happens when you let someone go through school with Chat…
ytr_UgyQhGL7e…
I'm sorry, Hank, you're really anthropomorphizing AI models here. Computers are…
ytc_Ugx1mxNKi…
You can't blame the car for that, the driver should have been watching the road …
ytc_UgwhT6CiS…
I tested a few of these so called text-to-picture AIs and the results I get most…
ytc_UgyIlf_bJ…
Comment
The Halting problem proves that no algorithm can fully understand all algorithms. Therefore one of two things must be true: Either there is an algorithm that humans cannot understand (not yet proven to be the case), or the human mind is not algorithmic and therefore cannot be duplicated by an algorithm.
youtube
AI Jobs
2026-03-06T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxnypCJyhJp7h22cKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyVoiW_PSj5ks79daR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6cHvLfmtASvGMHaZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyLQcsG5GdZaBHBudx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwnOPgjDV6aTOF-luB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzyYqr7zcFl5lETrq94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwcwvYgXrnF-0C3f3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxg8K9tDw-CzeLuE2l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwYNeMmoljc26INoC14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxPi6sgKt3LgQ23E554AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}]