Raw LLM Responses
Inspect the exact model output for any coded comment; responses can be looked up by comment ID.
Random samples

- "Also, when people keep asking employers to pay them $25 an hour to put a burger …" (ytc_Ugwfn2Svg…)
- "If we pause AI, the sad thing is that China won't. If China gets AI before the w…" (ytc_UgzWk-R5w…)
- "This is Ironically a Possibility Timeline wise. Simple solution, keep A.i on a s…" (ytc_UgzoXd4BZ…)
- "I think AI art is pretty cool. It’s funny we had this discussion where Detroit b…" (ytc_UgyopzZoi…)
- "Its a scam to get money into A.I folks. Silicon Valley dosent have any products …" (ytc_UgxyCXN6L…)
- "The kind of reasoning that is in this clip is just humans trying to cosplay robo…" (ytr_Ugw7l8Ovl…)
- "3:50 ok, yes China winning the AI race(its a race now ok) is a risk, but what is…" (ytc_Ugxi3PWee…)
- "@caav56 Not really. Even autonomous drones operate on electronic chips which can…" (ytr_UgxazD4F6…)
Comment
AI will never be conscious. First of all it isn't even intelligent. That is why it has the word "artificial" in front of it. It is an intelligent algorithm created by humans to *mimic* intelligence (and I am a computer scientist). YouTube personality acollierastro demonstrates this perfectly in her video on the topic. You can teach a human the difference between a stuffed cat and a real cat in a few seconds (or sentences). THAT is the result of intelligence. In order to get an AI to even "recognize" a stuffed cat is even a cat, you have to expose it to thousands of sample pictures so that the algorithm has a chance to "network" its values together in order for it to do the same (just like you had to do with the original cat). TO THE COMPUTER ITS JUST A BUNCH OF NUMBERS. Not pictures. *Humans* also have to train them by telling the machine whether they got it right or not.
youtube
AI Moral Status
2023-12-11T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxXhP7pSD8_8qyhE-x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugybv70g3RDTuZYraSt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy4mWkScrRUlNlE9vd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKdnfynG9C3jGzQiZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwCL_S78aG5YI1vpc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyhcjPHGQojKfJYDqd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyJea3O-Nl3SfoevE94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxNSShD2JWOJIPZ7DV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzgTv4bsCwSesMZMSh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzP66sD8aDDxiydLkR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}
]
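
The raw response above is a JSON array of per-comment coding records, one object per comment ID with the four coding dimensions. A minimal sketch for parsing and validating such a response is below; the field names come from the response above, but the allowed-value sets are assumptions inferred only from the codes visible in this dump, not an exhaustive codebook:

```python
import json

# Coding dimensions as they appear in the raw LLM response above.
# NOTE: these value sets are inferred from the visible codes and are
# likely incomplete; extend them from the actual codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "indifference", "approval", "resignation", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records.

    A record is kept when it is an object with an "id" field and every
    coding dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Usage: one record shaped like those in the response above
# (the ID here is a placeholder, not a real comment ID).
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"deontological","policy":"none",'
       '"emotion":"indifference"}]')
print(len(parse_codings(raw)))  # → 1
```

Validating each record against explicit value sets makes malformed or hallucinated codes fail loudly at parse time instead of silently entering the coded dataset.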