Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by drawing random samples.

Random samples:
- `ytc_UgwtXfzKc…`: Definitely support, AI is horrible, All the videos you see, the photos you see, …
- `ytc_Ugwpt-vuI…`: I don't have a problem with AI. The problem I have is that we use it on the dumb…
- `ytr_UgyAJVJSy…`: RIGHT!? That was probably the most comprehensible breakdown of AI modeling that…
- `rdc_degh0py`: Not a bad name for it. It's the mentality of "it's their problem not ours". So…
- `ytr_UgxtAgNsX…`: AI will never be human, will never take over humans, it's just going to be the n…
- `rdc_jkff6n3`: Wow, 10 comments in and I swear no one read the article. I was just listening to…
- `ytr_UgwgBnPs3…`: Then prepare for disaster because ai will likely make mistakes. Even grammarly p…
- `ytc_Ugy2dnUF9…`: AI is part of the Vulnerable World Hypothesis. Right now, a savvy enough person,…
Comment
> Did any real AI developers/researchers signed it?
> I mean. From my expertise, I can tell that GPT model is nowhere near and never ever even theoretically can be close to human or w/e is scaring those guys. It doesn't mean that OpenAI cannot create something that will be. But it would require completely different approach and technology. Improving GPT model will never get you there just by design.
> It may make some people lose their jobs tho. As did invention of cars, calculators, computers, radio, television, music recording, and many other things.
youtube · AI Governance · 2023-03-30T08:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxX2pPNAbMKGKgQCl14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRcVwtvklRr6KngYp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQarZaINcIk-PKD614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzGQbCUlRP0JbtNyzh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw7Gg7K7o3RfMufPat4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxun2f0vo1K3CnxY814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_h-xYKkgUh_gwe0B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyoSgj94SfR_2aUzpV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwKuE72K413BLUrQax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyiUE7lPXQZWT3HcLZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
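The raw response is a JSON array with one coded record per comment, each carrying the four dimensions shown in the table above. A minimal sketch of the by-ID lookup over such a response (the per-dimension vocabularies below are assumptions inferred from the rows shown, not an official codebook; the two-row payload is an abbreviated example):

```python
import json

# Abbreviated example payload in the same shape as the raw LLM response above.
raw = """
[
  {"id": "ytc_UgxX2pPNAbMKGKgQCl14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzRcVwtvklRr6KngYp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Allowed values per dimension -- inferred from the sample rows, an assumption.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "resignation", "approval",
                "outrage", "unclear"},
}

def index_codes(raw_json: str) -> dict:
    """Parse the response and index coded rows by comment ID,
    skipping any row with an out-of-vocabulary value."""
    out = {}
    for row in json.loads(raw_json):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[row["id"]] = row
    return out

codes = index_codes(raw)
print(codes["ytc_UgxX2pPNAbMKGKgQCl14AaABAg"]["emotion"])  # indifference
```

Indexing by `id` makes the "look up by comment ID" view a single dictionary access, and the vocabulary check flags malformed model output instead of silently storing it.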