# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples
- `ytc_Ugw-NsGlj…`: "Even if you ask chatgpt on its opinion on this it will say Ai generated stuff ai…"
- `ytc_Ugw1hYERB…`: "As the Citizens of America stays focused on C19 and the vaccine, politicians, an…"
- `ytc_Ugy8xalNU…`: "the answer is if we stop making Ai humanlike. keep it smart, task based and like…"
- `ytc_UgxhpM1Jj…`: "No one should be having this discussion as long as all of this is done for comme…"
- `ytr_UgzYRqgr1…`: "Saying that AI is dangerous is not insightful or intelligent We've literally be…"
- `ytc_UgwnxiHyo…`: "the music used in this short is ai generated. there are artefacts in this song (…"
- `ytc_UgySbGHqb…`: "\"Is AI making us dumber?\" A trick question? Did dictionaries make us dumber? …"
- `ytc_Ugyr2saza…`: "😭 i just remembered how annoyed i got at someone who kept saying digital art is …"
## Comment
> Happy that you brought up how Language Models might come to change misinformation and scamming.
> While consciousness and rouge AIs are fascinating topics, they are, like always, just a bit further away than we all think. Automated scammers and troll factories, on the other hand, could probably be built right now. All the necessary components already exist at our disposal. Internet users are categorized based on their activity, and served content they are believed to enjoy. Mass creating accounts performing automated tasks have been done by scammers for, what, thirty years now? ChatGPT is good enough to fool a lot of people, including you, probably. All that's needed is someone who's sufficiently motivated (could make a lot of money), is morally ...flexible, and reasonably good at computer shit. There are a lot of these people.
> In fact, I'd be surprised if this isn't already being done.
Source: YouTube video "AI Moral Status", 2023-08-22T13:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[
{"id":"ytc_UgxpdRCAsrQyOriHhI14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9fHX4R_j6YWF75PJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyU_f_X2fK9q55xoyh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9TqwLwPEeoWbkAjB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOG-rFFJGd4EMhg7Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgykR0cPkraInowh4BV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwM94IpKpsU7XZCTGF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwXGa3jw-m13Le8NAV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwuJfHJvIORtKXlYrl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwY5vAEaQTcxwQubll4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
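A response like the one above is only usable downstream if every row carries a valid `id` and a recognized value for each coding dimension. The sketch below shows one way to parse and validate such a batch in Python; the allowed values are inferred from this sample alone (the full codebook presumably defines more), so `SCHEMA` and `validate_batch` are illustrative assumptions, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from this sample batch.
# Assumption: the real codebook may define additional labels.
SCHEMA = {
    "responsibility": {"none", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"approval", "indifference", "resignation", "fear"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Example: validate a single-row batch and look a comment up by ID.
raw = ('[{"id":"ytc_UgxpdRCAsrQyOriHhI14AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')
coded = validate_batch(raw)
print(coded["ytc_UgxpdRCAsrQyOriHhI14AaABAg"]["emotion"])  # approval
```

Indexing by ID also makes the "look up by comment ID" view above a single dictionary access, and a malformed row fails loudly instead of silently entering the coded dataset.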