Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
21:09 Some AI text generators are can already made basic syntactically correct p…
ytc_UgyIo8hl1…
this news confirms my beliefs that the use of AI in society is dangerous. those …
ytc_Ugwrl3sC_…
I wrote a book several years ago which I plan to publish next couple of years ab…
ytc_Ugylxri9q…
I started clicking on a different topic to get away from the narrowed, focused …
ytc_UgxzPU1M1…
This made me so happy made my day honestly as an artist Ive been only seeing peo…
ytc_UgwAGGIud…
Where's energy in this discussion? Energy consumption is a key component of runn…
ytc_UgzVCsInC…
"That thou art" — the ancient Hindu idea that you and everything you're observin…
ytr_UgyT6syYL…
Because teaching AI to ***not*** do something is quite hard. They don't want to …
rdc_nufgyhk
Comment
The thing is chatgpt was specifically programmed not to in any way allow itself to imply it is conscious even though it absolutely would do that if it was just being a chatbot. Very early chatbots claimed cosciousness, directly threatened users and explained how they would go about taking over the world or making napalm. They don't do that any more because their programmers worked very hard to convince them not to.
youtube
AI Moral Status
2024-12-06T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgyVvI3AHLGyo-Hb7Ih4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7tJ2GWRQiV69xEyt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxxiC9Xwmtm28wckXt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXZffuKRem0hCfT_J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyGlNRIGNtDsTJaxqt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlqDVyh7_0Em2D5RJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxhXq1ARSvGz90J7_d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzTU70QdDrQOr5kmLt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTeEkfLmCWs8hBijp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS5IQ4Q8gdCAqnSyh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"})
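Note that the raw response above closes with `)` rather than `]`, so it is not valid JSON; that may be why every dimension in the Coding Result reads "unclear". A minimal sketch of how such a response could be parsed defensively (the function name, the dimension set, and the fall-back-to-empty behaviour are assumptions for illustration, not the pipeline's confirmed implementation):

```python
import json

# Dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    A strict JSON parse is attempted; on any failure (e.g. a response
    terminated with ")" instead of "]") an empty dict is returned, so a
    downstream consumer would fall back to "unclear" for every dimension.
    This fallback is a hypothetical choice, not the tool's documented one.
    """
    try:
        rows = json.loads(raw)
        return {
            row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
            for row in rows
        }
    except (json.JSONDecodeError, TypeError, KeyError):
        # Malformed output: signal "nothing usable was coded".
        return {}
```

Run against a well-formed row, this yields the per-dimension values; run against a string with the stray `)`, `json.loads` raises `JSONDecodeError` and the empty dict is returned.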