Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "After the nft I felt burnt out and less drawing mood, ai almost killed it. Did n…" (ytc_UgwOwEEQO…)
- "there r some right loonys in the world lol you do seem to pick them on this chan…" (ytc_Ugwzt3FnZ…)
- "Just the simple FACT Lemoine was fired for voicing this Ai REALITY .. shows us G…" (ytc_UgwcjcjCG…)
- "It's hard to see an outcome where AI doesn't overtake humanity in the next 100 y…" (ytc_UgxRhF524…)
- "2nd guy is annoying and arrogant. For someone being in Data Science, hes quite o…" (ytc_UgymPuIxg…)
- "How do we know this video is not AI produced to put us in a relaxed state so we…" (ytc_UgzzA02dE…)
- "Not even just Putin. North Korea, Iran, African Warlords, drug cartels, terroris…" (rdc_oi32ok0)
- "Im a painter and I use stable diffusion to make my moodboard type references, bu…" (ytc_UgzeEvX3L…)
Comment
The Turing test is a terrible way of testing for what it wants to test. It was conceived over 80 years ago in a time where the ONLY conceivable way that anyone imagined having a "normal" conversation with a machine was if the machine was sentient. Of course, 80 years later and with computers a gazillion times more powerful than anything imaginable at that time and with access to hundreds of millions of examples of human communication, a complex data model can emulate (and very accurately) normal human response. NONE of that means that the algorithm is sentient and this guys knows it (or should know it). Either he does and this is all a personal publicity stunt (he seems to be launching into a "speaker" career) or he's just naive and fell for the AI (not the first time this has happened). There's no way a data model springs into consciousness without it being explicitly built into the system. And we are nowhere near today to even understand how that would work, so....no...pretty sure the AI is not self-aware.
youtube · AI Moral Status · 2022-07-01T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
  {"id": "ytc_Ugy--nBrbwfUY0dBkOV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwzqpzm30HX4C3wyNN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyuIMACQCCGvmzg0pt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyC6viuep8ppeuEP094AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzpHU5Gxc7f97ZvyU54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
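A batch response like the one above can be parsed and keyed by comment ID, which is all the "look up by comment ID" view needs. A minimal sketch in Python, assuming the model returns a JSON array of records with the four dimension fields; only the first record is reproduced here, and `index_by_id` is a hypothetical helper, not part of any actual pipeline:

```python
import json

# First record of the raw LLM response shown above (assumption: the full
# response is a JSON array of such records).
raw = (
    '[{"id":"ytc_Ugy--nBrbwfUY0dBkOV4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)

# The four coding dimensions every record must carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse a batch coding response and key each record by comment ID."""
    out = {}
    for rec in json.loads(raw_response):
        # Reject malformed records rather than silently storing partial codes.
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codes = index_by_id(raw)
print(codes["ytc_Ugy--nBrbwfUY0dBkOV4AaABAg"]["emotion"])  # indifference
```

Validating each record before indexing matters here because LLM output is not guaranteed to be schema-conformant; a missing dimension should surface as an error, not as a silent gap in the coded table.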