Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Alright my littlr artists - i bet you all use chatgpt for example for coding. I…" (ytc_UgxCz3DXh…)
- "A fake artist complaining about his false AI art not being seen as real creativi…" (ytc_Ugx5T75UQ…)
- "FCC and FDA have not done their job in decades. They are a joke. If you are goin…" (ytc_UgwPMAjUq…)
- "training on human data may somehow be limiting to the AI not to be smarter than …" (ytc_UgxP3KFBR…)
- "AI in distributed radio networks for sending information to brain and body is a …" (ytc_UgzbdC945…)
- "Google is worried about predictions that ChatGPT will devour Google's traffic ov…" (rdc_m27std5)
- "This is the next nuclear bomb except there was no time between when we mutually …" (ytc_Ugz8gwjZ3…)
- "Should you be allowed to copyright AI generated art if it's trained on a model c…" (ytc_UgxsDA3Cy…)
Comment
I worry that a lot of this discussion about AGI and the tradition of "AI Doomsaying" is just distracting us from real conversations about how to deal with the current problems involved with the actual technology we have now by focusing solely on the technology we might some day have in the distant future. We have a lot of real problems and it's interesting that a lot of the people most vocal about AGI are folks like musk and zuckerberg, vs the actual engineers working with or on the algorithms these systems use. It's definitely in OpenAI's best interest to inflate the actual capabilities of these algorithms and avoid talking about the real social, economic, and ecological issues presented by the real technologies they have. The fear of AGI is, given where we are currently, at best speculative fiction.
youtube · AI Moral Status · 2025-10-30T19:1… · ♥ 23
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgyRuf-LEeHJOhQFY594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFDND29OHmoAutKMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzpILv5R-BooW5l-JN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxXJuM1xzfGxnjZhz94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyNlIsw1wA0yT25VOF4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwlBfVxaP9ZumWOZJ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZtvGMWRhoEpeBmQB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgymJuLeNqo5aW-ONxh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwBNpC2KA_vsXcfm0F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgySpcrzMTMwLMJkIQF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}]
```
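A raw response like the one above should be validated before the codings are stored, since the model can emit labels outside the codebook. The sketch below is a minimal validator, assuming the dimensions from the Coding Result table and only the label values actually observed on this page (the real codebook may define more); function names are illustrative, not from the tool itself.

```python
import json

# Allowed labels per dimension, inferred from the values observed on this
# page; extend these sets to match the full codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "government", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA.

    Raises ValueError on a malformed record, so bad codings are caught
    before they reach storage.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

raw = '[{"id":"ytc_example","responsibility":"company",' \
      '"reasoning":"deontological","policy":"none","emotion":"outrage"}]'
print(validate_batch(raw)[0]["emotion"])  # → outrage
```

Running the validator on the pasted response is then a one-liner; any record with an out-of-codebook label fails loudly instead of being silently written with a junk value.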