Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews with comment IDs):

- "Running health spas, commune housing off grids, niche travel escapes. Niche will…" — ytc_UgxR2dPZq…
- "My my, are you an AI too, mam? Too good. Or maybe you're hearing AI and just ...…" — ytc_UgzU7M6xi…
- "We should remember that this is a depiction of how human beings think AI might b…" — rdc_ohzfik5
- "So first time i went to talk with gemini 2.5 i spent 3 hours on the nature of co…" — ytc_Ugyr0oHu8…
- "She quit to take part in a StarCraft tournament, claiming it was her lifetime dr…" — rdc_cjoxghj
- "when AI does democratize art, im going to vote against any artwork that uses AI…" — ytc_UgxiaAPuo…
- "Great video ! I work in tech and I draw as a hobby. I don't understand how gener…" — ytc_UgxzxrCXi…
- "CScel mad cause he wasted 10 years to learn to code just for chatgpt to do it al…" — ytc_UgwHakhAY…
Comment
Philosophically and scientifically you are making a bunch of leaps in logic. We already have machines that surpass the limits of human intelligence. So far we have not been unable to make a machine that desire anything. I have not seen any suggestion that we would even be able to. A computer could be trillions of times the raw "intelligence" of humans and still have no desire. We could easily accidentally create machines that destroy us but that does not mean the machines are self aware. So before we should ever ask if robots deserve rights we need to ask if robots could ever desire anything? The most advanced AI on the planet may be able to convince it is human while having no more will or self awareness then a calculator.
youtube · AI Moral Status · 2019-08-14T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwPxg6X6eYcSf3MNVl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw_Do7TEPRmKjfC_IJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwu475XbyBNRR7DQzZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzei0lpJr7UXRTL8nx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxcTSss5Zr3t6EVoz94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyOwpvMLnjdaWRhlVt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgztoeP-SEHVnbKHfWh4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyvkIO6mc0x5c0jxsp4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzTJh8Z_amz7izsx8t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzj6R1otD4Ujgt1hMd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
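A response like the array above can be parsed and sanity-checked before the codes are stored. A minimal sketch, assuming the allowed values per dimension are exactly those visible in this response (the real codebook may include more):

```python
import json

# Allowed codes per dimension, inferred from the response shown here
# (assumption: the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse the model's JSON array and keep only records whose codes are valid."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items())
    ]

raw = ('[{"id":"ytc_UgzTJh8Z_amz7izsx8t4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(len(parse_coding_response(raw)))  # 1
```

Records with an unrecognized code in any dimension are dropped rather than stored, so a malformed or hallucinated label never reaches the coded table.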