Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by its comment ID or by choosing one of the random samples below.
Random samples:
- ytc_Ugy-99Oa8…: "5:15 The only reason there is a double standard is because people know that in m…"
- ytc_UgymriO1c…: "Mixture of Experts is also a concept that predates LLMs as we know them today. T…"
- ytc_UgzzRbuLN…: "As an aspiring doctor I do not want to study for 15 years and end up in debt jus…"
- ytr_UgxiEicFD…: "It's a moot point, too. Humans are nothing but an algorithm that learns and take…"
- ytc_UgwuvMeYz…: "So we have birth rate plummeting but wait a minute AI's gonna replace billions o…"
- ytc_UgwZBdIr2…: "Interesting no one mentions the bubble is about to burst, ai has been a fad, bes…"
- ytc_Ugxf5aUL_…: "bro Nividia replaces you in seconds WSJ. EVERYONE. No one's regulating AI. Eve…"
- ytc_Ugx_CIJ9q…: "Ig my confusion is if every conscientious being wants purpose why would Ai remov…"
Comment
Finally something that I actually can talk about because I’m fascinated by the topic: people saying ai lies.
I still don’t really believe in calling it lying because like. It’s a language model. The computer literally has no idea what it’s saying.
Basically take the thought experiment “The Chinese Room” for example. A person is trapped in a room with books in Chinese and is told to write appropriate responses to the slips of paper slid under their door. This person doesn’t speak or write Chinese, but all the slips of paper are written in Chinese. So they look for those symbols in their books and write the responses they see.
But obviously they don’t know what they’re saying. And the only way the people outside would know they’re not fluent in Chinese is by knowing what is going on inside the room or seeing that their responses are odd.
Chatgpt and other bots are the person inside the room, albeit they go through their books much quicker and will make up new sentences based on all the data they have. But they just. Don’t know what they’re saying. So it feels wrong to call it lying. If I meowed at my cat and he thought that meant I was about to feed him when I wasn’t, it’s not really lying because I didn’t know what I said. It’s on the shoulders of the consumer to understand that the program has no way to differentiate fact from fiction.
youtube · AI Responsibility · 2023-06-10T22:0… · ♥ 65
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwdmKCnvQT0JjAa-zN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWZ2rs2CqHjrt4BEF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwy7KkrxnlZ-hlNYS14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyMFfYsT5hfjJfYcAp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwchx_uwP6GCZ7cNeF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyyZkQyiryDhtRG-Xx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwQb-itbfyMAHrzgpt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugymn0QIKz6ogckY4Tx4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwUXoXj-NsPx9H0G414AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy4K-VVV4TSu0CWJnZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
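The batch response above is a flat JSON array, so looking up the coded dimensions for a single comment ID is a one-step index. A minimal sketch (the `index_by_comment_id` helper is hypothetical, not part of the tool; the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` match the raw response shown above, truncated here to two entries for brevity):

```python
import json

# Two entries copied from the raw LLM response above, for illustration.
raw_response = '''[
  {"id": "ytc_Ugymn0QIKz6ogckY4Tx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy4K-VVV4TSu0CWJnZ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def index_by_comment_id(payload: str) -> dict[str, dict]:
    """Parse a coded batch and index each record by its comment ID."""
    return {record["id"]: record for record in json.loads(payload)}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugymn0QIKz6ogckY4Tx4AaABAg"]["reasoning"])  # prints "virtue"
```

The lookup returns exactly the values shown in the Coding Result table for the Chinese Room comment (responsibility `ai_itself`, reasoning `virtue`, policy `none`, emotion `mixed`).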