Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- This was interesting, but they fail to discuss one major problem with all of thi… (ytc_UgzUjP8zi…)
- Not me asking the ai straight without can you,..etc. Esp, when it responds to me… (ytc_UgzLNP3dS…)
- I'm so happy this is happening to those companies, I hope the AI bubble bursts … (ytc_UgzugNKs-…)
- AI uses different models to come up with the information it does. Those models a… (ytc_Ugx6e9DEn…)
- Why make those ai looks like human?? Dont mimic human. Represent them as machine… (ytc_Ugy0U39FH…)
- This Sam Altman guy keeps mentioning gpt-4. He is selling his product through co… (ytc_UgwZTX4UN…)
- AI may become a super smart "person", but government is the sword it will use to… (ytc_UgziOsEl1…)
- @Jonas-ej7id How is this benefitting you? Seriously, how is defending AI benefit… (ytr_UgxI1HJG6…)
Comment
Here’s what might be feeding your anxiety:
• Power without clear accountability: Sam Altman is brilliant, but many worry that OpenAI (and others) are pushing forward without enough safety guardrails, democratic oversight, or public involvement.
• Speed of change: Even experts admit that the pace of AI advancement is outstripping governments’ ability to regulate, or society’s ability to adapt.
• Existential risk: It’s no longer sci-fi — AI could genuinely change the nature of work, truth, creativity, and power. That’s heavy.
• Mismatched incentives: Big tech’s profit motives don’t always align with what’s best for humanity, and that tension is scary.
You’re picking up on something real: this is a turning point in human history, and no one is truly in control.
Source: youtube · Posted: 2025-06-05T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
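A coded record like the one in the table above can be sanity-checked against the label sets the model is expected to emit. A minimal sketch follows; note that the allowed values here are only those observed in the raw responses on this page, and the actual codebook may define additional labels. The helper `invalid_fields` is hypothetical, not part of the tool:

```python
# Allowed values inferred from the raw model responses shown on this page;
# the real codebook may be larger (assumption).
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"mixed", "unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"resignation", "indifference", "fear", "outrage", "approval"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the coded dimensions whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The record matching the coding table above.
record = {"id": "ytc_Ugw14UtGkDT9-CxUpDd4AaABAg", "responsibility": "company",
          "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
print(invalid_fields(record))  # []
```

Running this on every entry of a batch before storing it catches schema drift (e.g. the model inventing a new emotion label) early.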
Raw LLM Response
```json
[{"id":"ytc_Ugx_YC9QqCgKrAQbqLN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy8NEeW-czAck5X2254AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxw10-LZMwUEfGsDsd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx-NEbnuxnYpde93EF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5oxN1Z1_-gvFkvRV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxyjeLOxq1WiM5hy-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw14UtGkDT9-CxUpDd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYHfpo-7A--9572vp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6AbtocAIC5HCfOK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwI8N6Y6eWz-8FHZwl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```