Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Well then, create a law that CEO s should have to pay a large tax on each ai rob…
ytc_UgwlUBZY_…
Why are you wasting time playing dnd? Just get an ai to dm for you.…
ytr_UgwPqwA1G…
OMG ! SHe looks , talks and thinks so real ! ..ITS Truly A.I. invasion now. .…
ytc_UgxcEQCR1…
AI is worthless. Everything you can do with AI can be done using less cycles via…
ytc_UgxX3HeZL…
7:20 dude if I was good at art I wouldnt throw it away for AI…
ytc_UgwJWofxb…
Yes, AI is suppose to be a tool for that... but... just like how Einstein was ab…
ytr_Ugz0Y_FnK…
I think the main problem here is that “editing a book” reads as an euphemism for…
rdc_lz7tal9
Awesome video! I started using AICarma recently, and it’s amazing how it tracks …
ytc_Ugwdw8gwz…
Comment
One big and crucial thing about AI is:
It cannot learn on its own. No matter what it does to its based data from database, it can never jump out of the box. Sure, it has more access to datas than human, but it cannot create something is ground-breakingly new.
On the other hand, humans can. We not only just gets inspired, but we also use our own skills, experience in life, understanding of deeper moral concepts to pour them into our work.
This is the biggest difference between AI and humans. Which also defines why AI art is unethical and simply plagiarising.
youtube
AI Responsibility
2023-06-06T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgykgNgjEGpMhy_6bPl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxRsatL6i3xHup0fgl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy754zm4-BLfYOieU14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzTsNnAZHwnP_A90KJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwqcz2NtfH3ibxWAo94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"mixed"},
{"id":"ytc_UgxXiArMnpk269JFdfp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugwp4H_J5L5lDGihXMN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyNQ1NGK-upVZmVv4B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzIR8kwMSw84MbBLKZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDNdSP60ZZgpfZui94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
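A raw response like the one above can be parsed and sanity-checked before its dimensions are written back to the coded table. The following is a minimal sketch: the allowed-value sets are inferred from the sample records shown here (not from any published codebook schema), and `validate_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# This is NOT an official schema; extend the sets if the codebook grows.
VOCAB = {
    "responsibility": {"user", "none", "ai_itself", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"resignation", "approval", "outrage", "mixed", "indifference"},
}

def validate_response(raw: str) -> dict:
    """Parse a raw LLM response and index the valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = rec
    return coded

# Example with a made-up comment ID:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
coded = validate_response(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Indexing by ID also gives the "look up by comment ID" behavior for free: `coded["ytc_…"]` returns the full coded record for that comment.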