Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- Is that true? Would make sense. I remember who quickly an AI on Twitter was inte… (ytr_UgwhUBrT4…)
- Race for Ai, is in a similar sense like that Steven King book/movie, the long wa… (ytc_UgzkbkpXa…)
- hello. long infodump comment incoming only vaguely related to the video, proceed… (ytc_UgwDo-92R…)
- "This specialist" No. This woman has absolutely no idea of the capabilities of AI… (ytc_UgyrIVlSl…)
- 10:05 [ EN ] This was not human error, the operator was given clear and confirm… (ytc_UgzA8fMp6…)
- Terminated, the ancients predicted it, one power will control, one power will de… (ytc_Ugz_Ljovm…)
- I recommend you check out 3blue1brown’s series on how LLM’s like ChatGPT are cre… (ytc_UgzaBhIYP…)
- idk what kind of AI you use, but chatgpt still makes hella mistakes, pretty stup… (ytc_UgxfOfbtD…)
Comment

> Nice video, love the theo-crafting :)!
> I think the big step we have to do first is to give rights to animals. How can it be that we consider moral rights for AI, if we don't even acknowledge that beings like dogs, cat or pigs have rights?
> Let's go for a bright and rightful future! :)

Source: youtube · Video: AI Moral Status · Posted: 2018-07-06T05:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugx7YznFYEUKkMe1iBd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzahW5WKawAqoKCB7t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwRHWKvJT8IhKO-_qF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbgNKJMW57e2gSy1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzYCJpRzmrEA7SN_ll4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7dI6ViiYSCEbnzft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzinrD6hweefSHzu-x4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy9MR1jF5P4ZT51IHR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy75Vkh-6d8zWFeqFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxnZ11_1Tt2abQ2lgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"})
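One plausible reason the coding result above falls back to "unclear" on every dimension is that the raw response is not valid JSON: the array opens with `[` but closes with `)` instead of `]`, so a strict parse fails. A minimal sketch of a lenient parser that attempts a one-character repair before indexing codes by comment ID — the function name `parse_batch_codes` is hypothetical, not part of the tool shown here:

```python
import json

def parse_batch_codes(raw: str) -> dict[str, dict]:
    """Parse a batch-coding response into {comment_id: codes}.

    The model is asked for a JSON array of objects, but responses are
    occasionally malformed (e.g. a trailing ')' instead of ']'), so we
    attempt one simple repair before giving up.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        repaired = raw.rstrip()
        if repaired.endswith(")"):
            repaired = repaired[:-1] + "]"
        records = json.loads(repaired)  # re-raises if still malformed
    # Index by comment ID, dropping the "id" key from each code record.
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

# Illustrative malformed response (hypothetical ID), mirroring the ')' defect above.
raw = '[{"id":"ytc_abc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"})'
codes = parse_batch_codes(raw)
```

A caller that cannot repair the response could instead emit the all-"unclear" fallback row shown in the coding result, which keeps the pipeline running while flagging the comment for re-coding.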