Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Suchir Balaji was an artificial intelligence researcher who worked at OpenAI from 2020 until August 2024. During his tenure, he contributed to projects involving the collection and organization of internet data used to train models like ChatGPT.
In October 2024, Balaji publicly expressed concerns about OpenAI's practices, alleging that the company violated U.S. copyright laws by using protected content to train its AI models without proper authorization. He argued that such practices could undermine the commercial viability of original content creators. Balaji articulated these concerns in an essay titled "When does generative AI qualify for fair use?" published on his personal website.
Tragically, on November 26, 2024, Balaji was found deceased in his San Francisco apartment. Authorities initially determined the cause of death to be suicide, though his family has disputed this conclusion and is seeking further investigation.
Balaji's whistleblowing has intensified discussions about the ethical and legal implications of AI development, particularly concerning data usage and copyright laws. His death has prompted calls for deeper scrutiny into the practices of AI research organizations like OpenAI.
Source: youtube · 2025-01-16T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzDb4n7iBrVpoIeM1h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwuLUFuFprYtMyEb0J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzhigjzYV-FxDPcneF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzWWy3USrI36emfOXV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwsVu0yPC97jDB4vXp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzHO0uvSHR2O2o6qUJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgynUUSxPB1L3FZDwIt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyvPItpPSjGxt1jZMV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy2RKYUuBZQogVJVSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxNFyF_PEWyB1g0rIR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"}
]
```
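A raw response like the one above can be checked before its codes are written back to the dataset. The sketch below is a minimal, hypothetical validator: the per-dimension value sets are inferred only from the codes visible in this dump (the real codebook may allow more categories), and the function name is illustrative, not part of any actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this dump.
# Assumption: the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw model response and keep only records that have an
    id and whose value for every dimension is in the allowed set."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed record: skip
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example with one valid and one out-of-schema record (hypothetical ids):
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "company",
     "reasoning": "deontological", "policy": "liability", "emotion": "unclear"},
    {"id": "ytc_example2", "responsibility": "company",
     "reasoning": "deontological", "policy": "subsidy", "emotion": "unclear"},
])
print(len(validate_codes(raw)))
```

Dropping (rather than repairing) out-of-schema records keeps the coded dataset consistent; rejected ids can be re-queued for another model pass.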