Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- ytc_UgySMLwDc…: "I tell u my story as a software developer i use AI to write code but AI will not…"
- ytc_UgwrWfzNL…: "AI IS TAKING OVER I CANT LIE MOST OF MY AT@T ISSUES WAS SOLVED THROUGH THE APP. …"
- ytc_UgyToZz1E…: "Wow, a story that has almost nothing to do with AI and everything to do with a h…"
- ytc_Ugz2p_H50…: "The ai movement will be fast tracked and destroyed by every program becoming nih…"
- ytc_UgyDpHFo9…: "Man, this is a completely new level of being pretentious. I can spend days on a …"
- ytc_UgzMMkQNx…: "Ladies get better at sex and ur oh I don’t like that lol bc this is coming and I…"
- ytc_UgzmsgULS…: "Open AI paid off the landlord wtf? Why would act like that for somebody taking …"
- ytc_UgzEnIem5…: "HUMAN ART WILL ALWAYS BE BETTER BECAUSE ITS EMOTIONAL ITS TRUE BUT AI IS FAKE NO…"
Comment
We don't know how to make AI safe...because we don't know how to make human's safe. Sounds simplistic...but it is anything but. It's called alignment. It is actually the biggest question in all of history: What is a human being... and what does a human being ultimately want? What does 'safe' mean? The simple fact is...the vast majority of people simply have no idea what the answer is. And AI is 'trained' on information derived from the vast majority of human beings. It's no secret that people fk-up. Often enormously. So don't be surprised that AI will as well. There is a solution to this predicament...right in front of our faces in fact...but it is anything but simple.
Source: youtube · Topic: AI Governance · Posted: 2025-09-04T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
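
For anyone scripting against exports like this, here is a minimal sketch of the per-comment record, assuming the four dimensions shown in the table above. The type name `CodedComment` is hypothetical, and the value lists in the comments are only what appears in the samples on this page, not necessarily the full codebook:

```python
from typing import TypedDict


class CodedComment(TypedDict):
    """One coded comment, mirroring the table above. Hypothetical type."""

    id: str              # e.g. "ytc_UgymQejqPY0vbczVQth4AaABAg"
    responsibility: str  # seen here: company, government, ai_itself, distributed, none, unclear
    reasoning: str       # seen here: consequentialist, deontological, mixed
    policy: str          # seen here: regulate, liability, none, unclear
    emotion: str         # seen here: fear, outrage, resignation, approval, indifference, mixed
```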
Raw LLM Response
[
{"id":"ytc_UgymQejqPY0vbczVQth4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxS86-05T9bvH3peid4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwmlZbCzn2ft0Zrx2p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAqyEg4nVc8Xs-EAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8y7g_dCnDcJFTvel4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyx6Q4hR4WAbLIT_zh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxw6sauUEBGK-vN9M54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFyGo1Wo7ZNdIQKcx4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz7Uu1cdJD8ppR_sv54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz0-s_OeHg-zFymm7Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
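
The lookup-by-ID feature above amounts to scanning a batch response like this one for a matching `id`. Here is a minimal sketch, assuming the raw response is a valid JSON array as shown; `lookup_coding` is a hypothetical helper, not part of this tool's code:

```python
import json


def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a batch response and return the coding for one comment ID.

    Returns None if the model did not code that comment.
    """
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None


# Usage against the batch shown above:
# lookup_coding(raw, "ytc_UgymQejqPY0vbczVQth4AaABAg")
# -> {"id": "ytc_Ugym...", "responsibility": "unclear", "reasoning": "mixed",
#     "policy": "unclear", "emotion": "mixed"}
```

This matches the coding result table above: the displayed comment is the first record in the batch, and its four dimension values are what the table renders.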