Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_UgjY_cocR…: "People can and often do complete tasks without fully understanding the why. Doe…"
- ytc_UgxOzkr_o…: "The skills that you described: understanding the problem, discussing requirement…"
- rdc_g0x4nfn: "It sounds like in 2b you say that ai is bad because capitalism is bad. Actually …"
- ytc_UgzpA4l1g…: "When a human creates an art, its from heart, its a human showing human stories i…"
- ytc_UgzoFOtGr…: "i mean i agree with you but ai art isint very bad anymore(quality wise). have yo…"
- ytc_UgzFUat7a…: "Choosing between Thinking And Spoonfeeding, is confucious. Ai has to be strictly…"
- ytc_Ugxs4Hiro…: "Ai creation's can be used as refrence for art blocked artists but eehhj its not …"
- ytc_UgzeJLa2a…: ""the common goal of many technologists and ethicists: to steer the development o…"
Comment
Mr. Altman, while you warn users that their private chats with ChatGPT can be weaponized in court, it is time for OpenAI and the tech industry to face accountability. AI conversations are becoming an essential part of people’s lives, whether for support, education, or advice, and must be protected by clear, enforceable privacy laws.
Users deserve the same rights to confidentiality here as they do with doctors, lawyers, and therapists. It is not just about tech innovation anymore; it is about human dignity and trust. So instead of quietly collecting and exposing user data under legal pressure, OpenAI should lead the charge to legislate digital client privilege, data sovereignty, and strong user protections.
Privacy is not optional. It is a right. If AI companies want to earn user trust, they must respect that, not just warn people after the fact.
Source: youtube
Posted: 2025-07-30T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxmvKxIBiv3l5KARsJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzTeyyok9c9hhA5VzR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyXczWVzVDYFg134Rt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzsh9BC7jiAo32sAn94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyAKUq6PLrHCsGd52B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwS7gGAr-EnQWAxeZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx5UBnZRHiO2ALPfRl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyDjXm1ayue0pFAmQ14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzTAW43RVop5KAks_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwu1p81KpwoaOlRuTJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"indifference"}
]
```
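A response in this shape can be turned into a lookup table keyed by comment ID. Below is a minimal sketch of a parser that also validates each row against a per-dimension vocabulary. The `ALLOWED` sets are an assumption inferred only from the values visible on this page; the full codebook may contain additional categories, and the function name `parse_coding_response` is hypothetical.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the values
# visible in this dashboard; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of rows) into
    {comment_id: {dimension: value}}, dropping rows that are missing an
    ID or that use an out-of-vocabulary value for any dimension."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # a row without an ID cannot be joined back to a comment
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a category label that no downstream tally expects.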