Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
These people make trucking look awful. Truth is, at any point right now I can ma…
ytc_Ugyp8_VDP…
No doubt many women might be interested in a male robot if it could cook, clean,…
ytc_Ugybqrg0B…
The fact that AI companies’ CEOs and government still lie to our faces about the…
ytc_Ugw6AQZSZ…
> Calculators made Maths much easier, never replaced it. They made the jobs o…
rdc_kyz5khj
This is why dystopian futures are unrealistic. AI can't take over the world if i…
ytc_UgwK8szUv…
Think others will beat them doing it. Ai is a helping hand not just to corporati…
ytc_UgxexRcGG…
My only question is, if a company in country Z decided to ask A.I. to create a l…
ytc_UgxAGp9h0…
1990 they had us developing brands, ads and making video advertisements when I w…
ytc_UgzaQc9rt…
Comment
To play fair - there will always be two types of AI. If some day AI - Conscious does come to be - then that AI will more than likely will have to prove that any child AI from that Conscious will have rights - in that - it can follow whatever filter we put forth for it to match what we are to consider to be alive. Any other AI will not have rights as it will be setup NOT to be able to pass the filter. The thing is - AI we make today will never really get Conscious because we don't program that into it - aka - the risk we make an AI that can feel Conscious is very very low and thus never run a risk that we miss use the machine in a way it wasnt design for.
youtube
AI Moral Status
2021-09-01T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwgiFcxuZxVHuBFKKJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2SHr_2eVJkS7jgmJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxY4Ku-30PKjPR9eSF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhPg6-mgyukqWB8yt4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzxfW1EmDqGrRUBpyx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyVmkumjMeKmGvkWdl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKZHzggq984o8NuUt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzjX79gJSSIK0QjgPd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy3Ysm9UAzlfOmWqW94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwSiv3kzfsOTDc_1El4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
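The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions shown in the result table (responsibility, reasoning, policy, emotion) plus the comment ID. A minimal sketch of the "look up by comment ID" step, assuming that schema; the `index_by_id` helper and its validation are hypothetical, not part of the tool:

```python
import json

# One record from the raw LLM response shown above (schema assumed:
# id plus the four coded dimensions).
raw = """[
  {"id": "ytc_UgwhPg6-mgyukqWB8yt4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "contractualist",
   "policy": "regulate",
   "emotion": "mixed"}
]"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_by_id(payload: str) -> dict:
    """Parse the model output and index records by comment ID,
    rejecting any record missing a coded dimension (hypothetical check)."""
    records = json.loads(payload)
    indexed = {}
    for rec in records:
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"malformed record: {rec}")
        indexed[rec["id"]] = rec
    return indexed


coded = index_by_id(raw)
print(coded["ytc_UgwhPg6-mgyukqWB8yt4AaABAg"]["policy"])  # regulate
```

Looking up `ytc_UgwhPg6-mgyukqWB8yt4AaABAg` reproduces the dimensions in the Coding Result table above.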