Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I agree that people shouldn't send personal data to AI, nor use them as therapists or friends, but I don't think the scenario of them regurgitating exact conversations to other users is realistic. They can't even stay on topic in ONE conversation, let alone recall it from a billion others in the data.
That said, the conversation is still unfiltered and in plain text in your chatbot, so OpenAI (or whoever) can still use that data directly to build an online profile of you. If you mention you like a certain brand of clothing, you might suddenly start seeing targeted ads for it.
So, I am concerned about leaking to the company, not so much leaking to other users, simply because AI models are just stochastic noise generators. Unless you use extremely specific language/dialect that only appears a few times in the entire dataset, the chances of "accidentally" prompting THAT specific workplace incident are near zero.
youtube · AI Moral Status · 2026-02-09T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwJmGROr1SRpqc8hKR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnEHbYX63ixo73oG14AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzqELBErOKSZy0pmyF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhmCsUhYnRfBYvsnB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJ3UdB0cC-hd6RvdR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwz_12cg83YAz155wF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyOj5o0gyxQ6tsh10x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugwg1sPyIghc9v_HZbR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVza7j0_tz6dBaBix4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPpiaBAUfsW13kRt14AaABAg","responsibility":"ytc_UgxPpiaBAUfsW13kRt14AaABAg","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
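The raw response above is a JSON array with one coding object per comment, each carrying the four dimensions shown in the table (Responsibility, Reasoning, Policy, Emotion). A minimal sketch of how such a response could be parsed and validated before lookup by comment ID — the `ALLOWED` vocabularies below are assumptions inferred from the values visible in this sample, not the full codebook, and `parse_codings` is a hypothetical helper:

```python
import json

# Assumed value vocabularies for each coding dimension, inferred from
# the sample responses above; the real codebook may define more values.
ALLOWED = {
    "responsibility": {"company", "user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "industry_self"},
    "emotion": {"indifference", "outrage", "resignation", "fear", "approval", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: coding}.

    Raises ValueError if any dimension holds a value outside ALLOWED,
    which catches the model drifting from the coding scheme.
    """
    codings = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {entry.get(dim)!r}")
        codings[cid] = {dim: entry[dim] for dim in ALLOWED}
    return codings

# Example: one entry taken verbatim from the response above.
raw = ('[{"id":"ytc_UgwJmGROr1SRpqc8hKR4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_UgwJmGROr1SRpqc8hKR4AaABAg"]["emotion"])  # indifference
```

Validating against a closed vocabulary like this makes a malformed or off-scheme model response fail loudly at ingest time instead of silently polluting the coded dataset.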