Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "So, let's say this AI is sentient. So what? As long as we have people on this pl…" (ytc_Ugz0Gl2Kr…)
- "Ai's polluting the planet with brain farts, and this is essentially the extincti…" (ytc_UgxtyuqJj…)
- "I don't think it's stealing personally. It's like how humans learn from other hu…" (ytc_UgyhosOFV…)
- "AI art cannot be subtle. It can imitate, but it can't be... art. Like ... I don'…" (ytc_Ugx3YjlB6…)
- "Realizing that the next step in evolution is AI and hopefully the Quantum comput…" (ytc_UgwH5VUUc…)
- "Joe Cool In that case, Google's self-driving cars already have 1.8 million mile…" (ytr_Ugj-Xh3Fx…)
- "Ai is just being mis used by stupidity, Ai could be used for so much good to evo…" (ytc_Ugxxw_jYS…)
- "Just some examples to understand what I mean with this: 1) \"ChatGPT, how heavy …" (ytr_Ugx_Sw3c2…)
Comment
Dr. Yampolskiy is not a fraud....not even close. He's a credentialed researcher who coined the term "AI safety" and has spent his career taking these risks seriously. My concern isn't his background. It's that he presents a minority position with a confidence level the actual evidence doesn't support, and that's its own form of misleading. For context: Yann LeCun, one of the most respected figures in AI, places the probability of these catastrophic outcomes effectively at zero. Yampolskiy puts it at 99%. That gap isn't a minor academic disagreement. It's the entire width of the debate. And Marina, your audience deserves to know where your guest sits within that debate....not just that he sounds certain. Certainty is compelling. It's also, in genuinely unsettled fields, almost always a signal to slow down.
youtube
2026-04-17T22:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy6HE2TbDx8QWWGkdx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},{"id":"ytc_Ugyd-BS-5fkIKd-vO894AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgzS-A-3YLunNd1Fb_54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},{"id":"ytc_UgzzO5yKhZJOfQq5V5V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugzu7M8ticLFUXRUWtB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgwPTddvtcxT3V2qZpl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgzVSBTKlvhXdy8IChR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},{"id":"ytc_UgwO97KdPSH2YaXzJLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgyK5z0rsH6Jy2wRe454AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgzmMCjRRvZ4Nx_JweZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"})