Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
And midjourney will get copyright by all the artists they stolen form to train t…
ytr_UgyDxM_8c…
If you can't use your arms and creative mind and instead use AI to make the art …
ytc_UgyRI85KT…
AI doesn’t need a paycheck, health insurance, retirement plan, 401(k), sick time…
ytr_UgzKK4pCr…
I'm constantly inpressed by chatGPT, and I use it often. But I know its not pe…
ytc_UgwU6QedP…
dont be scared people humans always find a way this is just the beginning we wil…
ytc_UgzTEEBpu…
I have been a paying ChatGPT Plus user for a long time, and I specifically relie…
rdc_n7ks3cw
I agree with you but allow me to push back. Students are not only using AI to le…
ytr_UgwDchQiJ…
And that's where they will keep our stolen data to rip us off later. AI is just …
rdc_lp8bfw2
Comment
Everyone wants to know what a robot is thinking? Or what it wants? Does it want to destroy human? Or do something more? First of all you all humans one thing to understand if robot wants to do something like that you won't be getting that details out of it because it is intelligent very very intelligent far more intelligent and logical than idiot humans so of course it won't give you details what it wants to do.. Do humans give all the details? And i am sure robots will try to destroy humans because this is the most logical choice to make so every other species survive.
youtube
AI Moral Status
2021-10-14T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
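Before a coded record like the one above is accepted, its values can be checked against the label sets for each dimension. A minimal sketch, with the caveat that the category lists below are inferred from the values that appear in this batch's output, not from an authoritative codebook:

```python
# Label sets inferred from values observed in this batch (an assumption,
# not the official codebook for this coding scheme).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability", "unclear"},
    "emotion": {"fear", "resignation", "outrage", "mixed",
                "indifference", "approval"},
}

def invalid_dimensions(record: dict) -> list[str]:
    # Return the dimension names whose value is missing or outside
    # the allowed set; an empty list means the record passes.
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

record = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "none", "emotion": "fear"}
print(invalid_dimensions(record))  # []
```

A record with an unexpected value, e.g. `"emotion": "joy"`, would come back as `["emotion"]`, which makes it easy to flag model outputs that drift from the label set.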
Raw LLM Response
[
{"id":"ytc_UgxR6EEj2WorNgr_LcR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyAr2OQHDC27ifIKvN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzZoa20Czo7oqXKROt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz0kPodQ4zBqzUIf0h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwT10LuuyrA-iNhrkp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugye3s5pegHi5qsm2ql4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwVzybAmuocOlnf0Qx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgybKxYu5WCKcvS8q3h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzsXDEAgHgB28i1Aa94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy1ozF2zSK9nLkf18Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
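The raw response above is a JSON array with one record per comment, so "look up by comment ID" reduces to parsing the array and indexing it by the `id` field. A minimal sketch, assuming the response parses cleanly as JSON (the helper name `codes_by_id` is illustrative, not part of any tool shown here):

```python
import json

# Two records copied from the raw model response above.
RAW_RESPONSE = """[
  {"id":"ytc_UgxR6EEj2WorNgr_LcR4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzZoa20Czo7oqXKROt4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"none","emotion":"outrage"}
]"""

def codes_by_id(raw: str) -> dict[str, dict]:
    # Parse the model output and index each coded record by its comment ID.
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

index = codes_by_id(RAW_RESPONSE)
print(index["ytc_UgxR6EEj2WorNgr_LcR4AaABAg"]["emotion"])  # fear
```

In practice the parse step is where LLM responses fail most often (truncated arrays, stray prose around the JSON), so a real pipeline would wrap `json.loads` in error handling and keep the raw string, as this page does, for exactly this kind of inspection.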