Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The Crazy thing is that its not actually A.I., its an Algorythm. Machine Intelig…
ytc_Ugya8iqwn…
I think its somewhere in the middle as we continue to learn how to collaborate w…
ytc_Ugx8_EkyD…
Oh, I thought you were going to say AI might not replace your job because AI doe…
ytc_Ugwie6ai4…
It wouldn’t have to be handwritten. Most colleges already have testing centers …
rdc_nu1no9c
The answer of big AI tech will be to get rid of "superfluous" people. Have no il…
ytc_Ugy9jYO7r…
I’ve been waiting for this for a while, so now people understand this AI stuff i…
ytc_UgxHwtYs9…
But if you give it a wrong context the AI can fail, and it was created by man …
ytc_Ugym_82bl…
@sorituanasution1180 Sam is. Who else? He doesn't know how AI works and is tryin…
ytr_UgyWKqHUz…
Comment
This is an illustration of how *language* is an imprecise mechanism for describing ideas. English is limited in how it expresses logical expressions. Not to go outright Jordan Peterson here, but the kernel of "truth" in what he bloviates about is that English lacks precision and accuracy in exchange for narrative and variety. Tokenization of words to train an LLM is a lossy compression algorithm subject to entropy, just like reading a textbook. It is very similar to how the cerebral cortex functions. The discovery of multi-modal LLMs that effectively can tokenize all kinds of input, not just text (for example, sound waves in the case of Alex's interlocutor), and produce a natural sounding output is a discovery of the same magnitude as Bernoulli's principle, if not Special Relativity.
youtube
AI Moral Status
2024-07-26T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id":"ytc_UgzwTcSDfIuhOf5aaf94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyJ7IcGzEuTahTlte14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzGk4877cDq5jHOwed4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwbz1x8plI3NLjajpd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy4HIxGbAOS0FPJqDN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6cKwruotD1_KAJBl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxww1BeWCSZBqSE7bN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy3iPRba-zBBKgQYWR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxGmG5SzOlGrAyvm6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyqV_odsU8OlglTsPF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
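The "look up by comment ID" step above can be sketched in a few lines of Python: parse the raw LLM response (a JSON array of per-comment codings) and index it by the `id` field. The field names are taken from the response shown; the variable names are illustrative, not from any actual tool.

```python
import json

# Two entries copied from the raw LLM response above (truncated for brevity).
raw_response = """[
  {"id": "ytc_UgzwTcSDfIuhOf5aaf94AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJ7IcGzEuTahTlte14AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]"""

# Index the coded comments by ID so one comment's coding can be
# retrieved directly, as in the "Look up by comment ID" widget.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzwTcSDfIuhOf5aaf94AaABAg"]
print(coding["reasoning"], coding["emotion"])  # → mixed indifference
```

The same dictionary can back both the random-sample view (iterate over `codings.values()`) and the ID lookup shown in this panel.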