Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This is one of the videos on this channel that comes across overly militant and …
ytc_Ugwb-0oSf…
So glad I am retired. At least I don't have to deal with this AI sh!t at work.…
ytc_UgzfsnCGV…
We appreciate your comment. On the AITube channel, we use advanced artificial in…
ytr_UgyCcHNfM…
Just saying, I notice that generative AI doesn't know how to speak French: 4 …
ytc_UgwzySLEQ…
I tried it on ChatGPT (GPT-5 auto) and the results were interesting (First it …
rdc_nnuk8tu
Would you hire a person who gets tired after 3-4 hours and then complains, or wo…
ytc_Ugylq-Nh0…
there are artists who do it really well. of course many less because it's an unu…
ytr_UgyNQDE4F…
it was so quaint that we thought climate change was a problem 10 years ago, no…
ytc_UgyyvVQt2…
Comment
I think the biggest problem with this is that AI doesn't have any morals. It technically knows what's right and what's wrong (or at least knows what we think is right or wrong), but it doesn't actually believe in that. An AI's purpose is to fulfill the task a human gave it, and it will do anything it has to to achieve that, even if that means destroying humanity.
If someone gave an AI the task of stopping climate change or building the perfect economy, its solution would most likely be to make humans go extinct. While humans are incredibly intelligent, we are also incredibly ignorant. We know the consequences our actions have and know that we are the problem, but we are also empathetic enough to know that it would be wrong to completely wipe humans off the face of the earth. An AI does not have empathy in any way and never will, meaning that giving it the task and power to solve real-world problems - all of which have a moral aspect to them - will eventually be the downfall of humanity if it is not stopped, or at least regulated.
(I'm usually an optimistic person, but seeing humans rely on AI out of pure laziness is starting to make me lose hope. It would be an expected but unnecessary tragedy if humans were the cause of their own downfall.)
youtube
2026-04-05T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwY5wZrVz7q072Lce54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzN6cjGPZ0Qp6yHiaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgynJw21fzNVSCC0GZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxpK-i4xZSOorpeTkJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3c6Mrt_qVTvDj6MJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzzYixTewToiSN7hJZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwUqd-1N7PTktnLelN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMIxyv902vSRjNRU54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxf1iYj0AZCo3sBawl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxH5C1CST8iSQNhrqx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
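The raw response above is a flat JSON array with one object per comment ID, while the "Coding Result" view shows a single comment's row. A minimal sketch of how such a payload could be indexed for the "Look up by comment ID" view (the two sample records are copied from the array above; the parsing code itself is an assumption about the tool, not its actual implementation):

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codes,
# abbreviated here to two records taken from the sample output above.
raw_response = '''[
  {"id": "ytc_UgwMIxyv902vSRjNRU54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwY5wZrVz7q072Lce54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

# Index the array by comment ID so a single comment's codes can be
# retrieved directly, as in the "Look up by comment ID" view.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

row = codes_by_id["ytc_UgwMIxyv902vSRjNRU54AaABAg"]
print(row["policy"], row["emotion"])  # liability fear
```

Because each object carries its own `id`, the lookup tolerates the model returning records in any order; the same dictionary could also back a check that every sampled comment received exactly one coded record.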