Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
As an AI researcher I hope to bring some awareness: the term « chatting » or « speaking » from a chatbot is literally ridiculous ! A model does not speak, it outputs the most probable token from a vocabulary on the data that it was trained on ! So when they « speak » they actually DONT they just guess the next word to output given the previous one.
Like if it can help, you can visualise it that way: imagine asking a machine to take a book, and you would ask them to give you the next word after « hello » based on occurrences in the book. They might say it’s « my » then you do the same thing again and they output « name » and etc etc until you get « hello my name is Alex ».
It is EXACTLY the same
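The commenter's book analogy can be sketched as a toy next-word predictor: count which word follows each word in a small corpus, then repeatedly emit the most frequent follower. This is a minimal bigram sketch of the idea (the corpus, function names, and generation length are illustrative, not taken from any real model):

```python
from collections import Counter, defaultdict

def build_bigrams(text):
    """Count, for each word, which words follow it in the text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def most_probable_next(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counter = follows.get(word)
    return counter.most_common(1)[0][0] if counter else None

# Toy "book": the only data the predictor ever sees.
book = "hello my name is alex . hello my friend . my name is alex"
model = build_bigrams(book)

sentence = ["hello"]
for _ in range(4):
    nxt = most_probable_next(model, sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)
print(" ".join(sentence))  # hello my name is alex
```

Real language models condition on much longer contexts and output a probability distribution rather than a single count-based pick, but the loop structure (predict next token, append, repeat) is the same.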
youtube · AI Harm Incident · 2025-08-16T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxyAW22y72_dh9lZZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxQ8vq10J0JKcdqVg14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzKLpHOfdcpxpPsqmt4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz34RX2QpB9vqC21ed4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGlPhcz4Br_UDvVpp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGJ3yVZLsNPq-TNUR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwqPrpV8VGGU8D69kt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgymD_3mo-eARXFkp2p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwfPCnC_ZEJovyW9EJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"sadness"},
  {"id":"ytc_UgwvffxlInM6FuqxP5x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
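A raw response like the one above can be parsed and sanity-checked in a few lines. This is a minimal sketch, assuming the allowed values per dimension; the actual codebook is not shown in this view, so the `ALLOWED` sets below are inferred from the visible responses and may be incomplete:

```python
import json

# Assumed vocabulary per coding dimension (inferred, not the real codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "none", "regulate", "industry_self", "unclear"},
    "emotion": {"outrage", "indifference", "resignation", "mixed", "sadness"},
}

def parse_coding_response(raw: str):
    """Parse a raw LLM coding response; flag out-of-vocabulary values."""
    rows = json.loads(raw)
    problems = []
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                problems.append((row.get("id"), dim, row.get(dim)))
    return rows, problems

raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]'
rows, problems = parse_coding_response(raw)
print(len(rows), problems)  # 1 []
```

Validating against a fixed vocabulary catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently corrupt downstream tallies.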