Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- "Just discussing this as a theoretical possibility shows how far we still need to…" (ytc_UgzMJrYEp…)
- "In February this year, I attended a conference on AI, an event organized by all …" (ytc_Ugw5rawBx…)
- "Can all of us breakers commit to no ChatGPT or sora or anything of the sort? The…" (ytc_UgyQIhw3n…)
- "Obama is gonna merge with AI to rule the 🌎, very soon. Musk wants to rapture …" (ytc_UgzJJ6wL6…)
- "@sagestrings869 What is good art is subjective… AI art isn’t even art, how can s…" (ytr_UgwhNz_M4…)
- "Anti AI Karen’s wins because AI wouldn’t exist without humans AI literally copie…" (ytr_UgzufZalr…)
- "I think an interesting question to ask fundamentally, since we are the ones fuel…" (ytc_Ugzv7vYpW…)
- "Easy tiger, with the God like AI mumbo jumbo. A computation of an awareness is n…" (ytc_Ugw2VxjUh…)
Comment

> Senator Sanders - I'm a Vermonter and have been in the process of writing you a letter to address these very issues. It is so important that these issues be brought to light. But there are other impacts that are potentially more dangerous and more likely to happen on a shorter time horizon: 1 - Limited ownership of knowledge dissemination. Just a few companies will control almost all of the knowledge landscape, and will control (with the use of algorithms) what information will be available to you. Censorship will be invisible and widespread. 2 - The rapid increase of 'AI Slop', or generated fake news stories and scientific articles. Making it increasingly difficult to determine fact from fiction, allowing for increased polarization and decreased critical thinking skills. 3 - Soon, these companies will be more powerful than governments, and will have the ability to control government discourse, and public perception of elections.

youtube · AI Jobs · 2025-10-08T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxCJLo9wXUapZjotH54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz82fhaGayVF9WWf614AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxr1jjsLzGwDcavyf94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx5weRlLwTydxA-56p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugwhdn-Ipr9S_k8pBvF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVFS9CL5xZnEd6HLJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzh6CJw5dSO6u9kAhJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwNG40aD6RDfPoHYjN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxLEwpzdfa-I4BWisl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx9XQdCa9ldKEltpet4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}]
```
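Each record in the raw response carries the same four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and checked before storing it, assuming the allowed codes are only the ones visible in this response (the full codebook may define more):

```python
import json

# Allowed codes per dimension, inferred from the responses shown above.
# This is an assumption for illustration; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any record with a missing or unknown code."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id', '?')}: bad {dim!r} value {rec.get(dim)!r}")
    return records

# Hypothetical one-record batch in the same shape as the raw response above.
raw = '[{"id":"ytc_example","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
print(len(validate_batch(raw)))  # 1
```

Validating before insertion means a single malformed record fails loudly with its comment ID rather than silently polluting the coded dataset.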