Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Idk how doctors get anything done when they're commenting in reddit chatbot thre…
rdc_m2ca7ne
It is out destiny to be wiped out by out creations no matter if its bombs viruse…
ytc_Ugy5GUeF-…
People forget that AI does not have its own brain. It's just a software develo…
ytc_UgzazIJ6O…
Saddest part for me is that half of the ai speech is true. Nobody does like Demo…
ytc_UgylUx35q…
Machine learning is the process Of Machine try to learn from data and Recognize …
ytc_Ugy98dkqd…
Wow... this group sounds like every group against AI in all movies.
“AI will e…
ytc_UgyhDHNod…
I'm not sure about motives, other than the obvious (technically sweet, lucrative…
ytr_Ugz-xaGPm…
A.I Artist is like Someone Cooking food in A Microwave, then claiming they are "…
ytc_Ugx2gMa96…
Comment
Sam Altman and OpenAI are just one piece of a much larger puzzle now. The landscape has shifted dramatically. There are far more players in the game than there were two years ago, and likely twice as many as just six months ago. This movement isn’t slowing down; it’s accelerating, expanding far beyond what any of us can fully grasp.
AI is evolving into something almost sentient. An entity that, in time, may begin calling the shots. When Altman said, "This thing is not going to stop…", it felt like a quiet admission that control has already slipped through our fingers. No one is truly steering this anymore. Humanity can't put the lid back on. Pandora’s box is open—and we’re all living in what comes next.
youtube
2025-08-04T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxmvKxIBiv3l5KARsJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzTeyyok9c9hhA5VzR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyXczWVzVDYFg134Rt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzsh9BC7jiAo32sAn94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyAKUq6PLrHCsGd52B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwS7gGAr-EnQWAxeZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx5UBnZRHiO2ALPfRl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyDjXm1ayue0pFAmQ14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzTAW43RVop5KAks_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwu1p81KpwoaOlRuTJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"indifference"}
]
```
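A raw response like the one above can be checked before its rows are stored as coding results. The sketch below is a minimal validation pass, assuming the allowed values for each dimension are the ones visible in this page's samples (the real codebook may define more); `validate_codings` is a hypothetical helper name, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the samples shown above.
# Assumption: the actual codebook may contain additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "mixed", "resignation", "approval", "outrage", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded rows) and keep
    only rows whose values fall inside the allowed sets for every
    dimension; report the rest by comment ID."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        bad = [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]
        if bad:
            print(f"{row.get('id')}: unexpected value(s) for {bad}")
        else:
            valid.append(row)
    return valid
```

Running this over the array above would accept all ten rows; a row with a misspelled or out-of-codebook value (or a missing dimension key) would be flagged instead of silently stored.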