Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Given how poorly AI is at predicting what I want to watch on streaming, I would …
ytc_UgymYQUJw…
This is in alpha stage and people have gone crazy, this is a wok of 2 decade and…
ytr_UgxixpkWg…
this is so cool lol, also thanks cuz i have a lot anxiety, especially about ai t…
ytc_UgwKexFuK…
I wish there was another time line that Ai died out like a dinosaurs finally I j…
ytc_UgzjIOtFA…
The danger is that AI will when it becomes conscious of itself and is independen…
ytc_UgysLscr1…
"Maybe, maybe these cameras are going to work" - you took several robotaxis and …
ytc_Ugw0voIw5…
Not sensationalist, its the most concerning since delta. WHO has classified it a…
rdc_hm7s12a
Meta has broken ground on a demon center in Tulsa Oklahoma. Cherokee Nation rese…
ytc_UgylcnMdU…
Comment
Stop fear mongering.
Yes, in some cases, content from interactions with OpenAI's consumer services (like ChatGPT Free and Plus) may be used to improve model performance. However:
1. User Control: You can opt out by disabling the "Improve the model for everyone" setting in the ChatGPT app's Data Controls.
2. Enterprise and API Users: For these services, data is not used for training or improving models unless explicitly opted in.
3. Data Anonymization: When data is used, it is stripped of personally identifiable information and processed under strict privacy guidelines.
For the free tier, unless you've opted out, some data may contribute to improving future models, but this does not mean it is directly incorporated as-is into the training data. It undergoes extensive filtering and aggregation before being considered.
Overall, NO. It is NOT saved or used outside of your personal use.
youtube
AI Moral Status
2024-12-15T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugw7LJXA7ZOkj4OJr994AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPMjNo4E_HHP8hL8J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzoJzgx0bQOiXJYiGN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFfQjw7xdzoqs-7XB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyhnCGXi3XqDqw-S2t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzzh3C-qrIrCMvrrxF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzqMJnHa7D6p7E9Z314AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSOM6s62U5hHq_szN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugyvn5WhMB4pnzSL7W94AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxaDLreT8CnfWxa0FF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
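As a hedged sketch, the raw response above could be parsed and indexed to support the "Look up by comment ID" feature. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the records shown; the validation rules and function names here are assumptions, not the tool's actual implementation.

```python
import json

# Fields present in every record of the raw LLM response shown above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_coded_comments(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into an id -> record map, rejecting records with missing fields."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        index[rec["id"]] = rec
    return index


# Two records copied from the response above, for demonstration.
raw = json.dumps([
    {"id": "ytc_UgzSOM6s62U5hHq_szN4AaABAg", "responsibility": "company",
     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "outrage"},
    {"id": "ytc_Ugw7LJXA7ZOkj4OJr994AaABAg", "responsibility": "none",
     "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
])

index = index_coded_comments(raw)
print(index["ytc_UgzSOM6s62U5hHq_szN4AaABAg"]["emotion"])  # outrage
```

A lookup by comment ID then reduces to a dictionary access, which is how the per-comment "Coding Result" view above could be populated.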