Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgwHsIwuL…`: "@vladaowo6367 this whole debate is pretty redundant and unnecessary. There are p…"
- `ytc_UgztHBZEg…`: "This AI lies a lot AND will admit it. It will claim anything, doesn't mean it's …"
- `ytc_UgwTPMPTw…`: "As a woman, I worry about how those AI romance bots teaches men that we will alw…"
- `ytc_UgxUJoOp3…`: "It's not necessarily a race, as we have been hearing. China and the U.S. could b…"
- `ytc_UgwnMAdAJ…`: "She was thinking hard when he asked if Ai thought we're in danger of it happenin…"
- `ytc_Ugwhv49Wp…`: "What is Wrong if AI Wants shows there’s Artwork . AI is Not Scam but Just Tools …"
- `ytc_Ugy4tzzS9…`: "Computers and printerd destroyed a typewriter's job. Keep crying about AI. Stop …"
- `ytc_Ugw7X3LJd…`: "Even if AI turns out malicious, I still don't understand the fear bc it is limit…"
Comment
> Amazing coverage, John. Powerful essay at the end - I would suggest clipping just that part for easy sharing. I think what you found to be the case with AI ethics researchers is also what those who have shifted to PauseAI's direction have found: The general public has a better grasp and more correct attitude towards AI than the average engineer. Many engineers fall in the middle, being knowledgable enough on the technology to think that they know better than the public, but not knowledgable enough on the topic to be able to recognise the level 1 counterarguments to their position. Leaders of the field get it, and people like Stuart Russel and Geoffrey Hinton are great assets to our movement, but CEOs and lab employees only get it as much as much as their continued profit incentive allows them to get it - often saying the right words but not having their actions match the issue's urgency. I myself am a PhD student working on AI safety topics and am often incentivised to work on these more established and tractable non-issues like the AI saying a curse word, over the real problem deserving our attention.

youtube · AI Governance · 2026-03-16T14:2… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
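A coded record like the one above can be checked against its coding scheme before display. The sketch below is hypothetical: the allowed vocabularies are inferred only from values visible in this log, not from an official codebook.

```python
# Allowed values per dimension, inferred from the values seen in this dump
# (assumption: the real codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"none", "government", "company", "user",
                       "distributed", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"approval", "outrage", "indifference", "mixed",
                "fear", "resignation"},
}

def out_of_vocab(record: dict) -> list[str]:
    """Return the dimension names whose coded value is not in the vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coded values from the table above.
record = {"responsibility": "none", "reasoning": "mixed",
          "policy": "none", "emotion": "approval"}
print(out_of_vocab(record))  # []
```

An empty list means every dimension carries a known category; any names returned flag values the model invented outside the scheme.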
Raw LLM Response
```json
[
  {"id":"ytc_Ugwv85jUqLdnvC1RcH54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwhAO9rQm0-5ljm34N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz9Cjj0ZG2G7puDoLd4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWtG54ym81l4csruN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyCG4lla29lY7MFQod4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyxfZb5O9s6c5vHuV14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzEfIt3A5BaJ1WUz154AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwcE55bOBxHr95Vl6h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzSKdEjnC9PL1tdivh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWbKEFUbM-iEPjtcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
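Because the raw response is a plain JSON array, the "look up by comment ID" behavior shown at the top of this page can be sketched by parsing the output with Python's standard `json` module and indexing it by `id`. The two records below are copied from the response above; the variable names are illustrative only.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_Ugwv85jUqLdnvC1RcH54AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwhAO9rQm0-5ljm34N4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

records = json.loads(raw_response)

# Index the array by comment ID so one comment's codes are an O(1) lookup.
by_id = {r["id"]: r for r in records}

codes = by_id["ytc_Ugwv85jUqLdnvC1RcH54AaABAg"]
print(codes["emotion"])  # approval
```

A real pipeline would also have to handle malformed model output (e.g. `json.JSONDecodeError` or duplicate IDs), which this sketch omits.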