Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- I like how she shut down his “devil’s advocate” statement quickly. There’s nothi… (ytc_UgyUQDjsy…)
- Companies who profit from AI are those who sell AI to other companies that "want… (ytc_UgzDCqBfr…)
- Whether it's drawing, baking, playing music or any of the other countless art fo… (ytc_Ugydps5cN…)
- She could've made it more her with more samples and data. And use the voice they… (ytc_UgySudQLv…)
- The autonomous problem can simply avoided by flying high, having to confirm targ… (ytc_Ugz6RszFG…)
- All for realness l.l.m. or as perceived large language models are passed off as … (ytc_Ugwwe9Y68…)
- I feel ill, every day there is new news about sa and deepfakes. Korea literally … (ytc_UgzDWPHNG…)
- Science fiction writers of both literature and film have been warning humanity o… (ytc_UgyxT0St4…)
Comment
This is a misunderstanding of how Large Language Models (AI, if you will) work. An LLM is a predictive text generator: it predicts text with a certain level of randomness. If you give it a prompt that includes sometimes saying something, then it will say that thing sometimes (out of pure probability). LLMs have gotten better at not hallucinating, but the prompt favors the word apple. AI's main driver for companies is agreeableness (because that makes people want to use them more, increasing profits), and you've already set a conspiracy-esque context. It's going to try to say what you want to hear.
Really, let's think about it this way: AI works on training data. If companies (and presumably the government or some shadow organization) wanted to keep this a secret, WHY WOULD THEY PUT SOMETHING THEY DON'T WANT YOU TO KNOW IN THE TRAINING DATA??? I could make a bot right now trained only on conspiracy theories, and all it would be able to talk coherently about would be conspiracy theories. This thought process fails on so many levels.
You don't have to invent some shadow-puppet government to be mad at when our government (I'm assuming US, but this applies to basically all governments) already gives us plenty to be mad about.
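The commenter's "predictive text with randomness" point can be sketched as a toy next-token sampler. This is a minimal illustration, not how any production model is implemented; the vocabulary and logit values are made up, chosen so that the distribution "favors the word apple" while still emitting the other words sometimes:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token from a logit distribution via temperature softmax.

    Higher temperature flattens the distribution (more randomness);
    lower temperature concentrates mass on the most likely token.
    """
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    r = rng.random()
    cum = 0.0
    for tok, e in exps.items():
        cum += e / total
        if r < cum:
            return tok
    return tok  # fallback for floating-point edge cases

# Made-up logits: the prompt "favors the word apple".
logits = {"apple": 2.0, "banana": 0.5, "pear": 0.1}
random.seed(0)
samples = [sample_next_token(logits) for _ in range(1000)]
# "apple" dominates, but the lower-probability words still appear sometimes.
print(samples.count("apple"), samples.count("banana"), samples.count("pear"))
```

With these logits, "apple" gets roughly 73% of the probability mass, which is exactly the commenter's point: the model does not "know" anything, it just samples from a distribution the prompt has shaped.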
youtube
AI Moral Status
2025-07-22T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz4zH0wZrjDqxiT9TR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzqE9t5INBvzG3YMqx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy0qC5Ii-m3KFE3d-p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwjpLxOCJPBQkHhxSl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8OjNBOeKJnu3eOr14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAprZ0_F8_eZ0qeDR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylEDZfRq3T0lbSPih4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxsfGvk3C3dEsR5F8N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMKf4FVUuKzmpcwY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyoRubeXK5AoAUP4Np4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
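A raw response like the array above has to be parsed and validated before it can populate the per-comment coding table. This is a minimal sketch of that step; the allowed values are inferred from the examples shown on this page, not from a published codebook, and the sample IDs below are shortened placeholders:

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the sample output above
# (an assumption, not an official codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def parse_codings(raw):
    """Parse the model's JSON array, keeping only rows whose values fit the schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = '''[
 {"id":"ytc_a","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
 {"id":"ytc_b","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_c","responsibility":"aliens","reasoning":"mixed","policy":"none","emotion":"fear"}
]'''
rows = parse_codings(raw)          # the out-of-schema "aliens" row is dropped
print(Counter(r["emotion"] for r in rows))
```

Validating against a closed value set like this catches the occasional off-schema label an LLM coder emits before it silently skews the tallies.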