Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples
- There will be plenty of jobs just different ones. When AI removes tasks from wo… (ytc_Ugxw3eTlG…)
- I work for open ai. It’s not “tricking” chat GPT. Chat GPT is trained to never f… (ytc_UgyiNhEhA…)
- I like copilot, but I don’t want to waste lots of battery on copilot+ stuff… (ytc_UgwGp2YEo…)
- digital art apps made art more accessible. ai imagery and videos just a horrid c… (ytc_Ugw64ER5M…)
- Moral of the Story : Slowly, Human are becoming the slave of AI😢… (ytc_Ugwafmcmr…)
- @cyborgLIS Have you seen a Waymo before I would think the lidar is on top of the… (ytr_UgxRdXJI3…)
- Who tf let Ai decide who needed care at hospitals? That's something we should no… (ytc_UgyeTu9ab…)
- I have no proof, but I'm sure we will never create human intelligence using LLMs… (ytc_UgwjZfZAE…)
Comment
I feel like describing language AI models like chatGPT as having "hallucinations" where they "make stuff up sometimes" is far too generous to what they actually do. These chatbots don't know what's true and what's false, they don't actually _know_ anything. They're _always_ making stuff up - guessing what sequence of words is probable in response to any given input - and it's more accurate to say that they get things _right_ sometimes.
Chatbots will confidantly lie to you, but actually calling it a "lie" is a mistake, because lying requires knowing you're spreading a mistruth, which they simply don't. Because they don't "know" things the way we do. That predictive text output gets to be called "AI" is a huge framing mistake that only makes people misunderstand and anthropomorphise these things.
youtube · AI Responsibility · 2023-06-10T14:3… · ♥ 382
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
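
For downstream analysis it can help to treat each coded comment as a typed record with exactly these four dimensions. A minimal sketch in Python, assuming the dimension names from the table above; the class name, field names, and the optional timestamp handling are illustrative, not part of the actual pipeline:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record mirroring the Coding Result table above.
# The four dimension names come from the table; everything else
# (class name, optional coded_at field) is an assumption.
@dataclass
class CodingResult:
    comment_id: str
    responsibility: str   # e.g. "none", "developer", "company", "user", "ai_itself"
    reasoning: str        # e.g. "unclear", "consequentialist", "deontological", "mixed"
    policy: str           # e.g. "none", "liability"
    emotion: str          # e.g. "indifference", "outrage", "approval", "mixed"
    coded_at: Optional[str] = None  # ISO timestamp, e.g. "2026-04-27T06:26:44.938723"

# First entry from the raw LLM response below, as a typed record.
example = CodingResult(
    comment_id="ytc_UgzAHCAIJHmATJXQo_p4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
)
```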
Raw LLM Response
[{"id":"ytc_UgzAHCAIJHmATJXQo_p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgzgjR-n5A4vfGI1R0F4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"outrage"},{"id":"ytc_UgwqA5Fj-OdDI2j3Tbp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgzpgzT3PODn90fjsYl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_Ugx7rwzJIqSkQMcCPOl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgzF2m7X5E0xDTQ-eAJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_UgwaxrBFTIhPWphN7Ut4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgxjwO61X8DI_4TU8_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_UgwE5tdhQkj3htfHlOR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugx6OQZbGTE-zPwq2PV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]