Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment (truncated) | Comment ID |
|---|---|
| Still, using AI at all is having a negative impact on our planet and our society… | ytc_Ugx6CfmhN… |
| If you say the quality problem is just a problem waiting to be solved then so to… | ytc_UgwJQG-A0… |
| Every clueless person has an opinion on ai and no one ever has any proof for the… | ytc_UgyP3LTup… |
| It is impossible. They don’t have the power grids and infrastructure to automate… | ytc_UgwUenNZ2… |
| That's great! I'm always very polite and respectful every time I use AI. Now I f… | ytc_UgzqrLdSU… |
| We are the most ridiculously stupid beings to ever exist. We are creating our ow… | ytc_Ugwd08FdM… |
| Lmfao! An expert says it today. And while the smartest people saw it happening i… | ytc_UgzfSZqjM… |
| Gang people aren’t born with talent 😭💔 Some people just work more at specific sk… | ytc_UgwqEfXMJ… |
Comment

> 50:35 The "attention grabbing " argument is f** killing me. I spend a lot of time on the web in a quest to find the truth. That means digging through heaps of very different information. So grab my attention all you want. Give me more of what I seek, come on! I dare you!
> The only real problem right now is how do we stop it from spreading false information and poisoning the pool. That's all folks.
> And I have a simple quick idea about how to do it. Make absolutely any output of AI inadmissible as evidence with only one exception - when proving it's use from a human for causing harm and then educate people about what that means.

Source: youtube · Topic: AI Governance · Posted: 2023-07-02T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
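The dimension values above come from a closed code set, so a coded record can be checked mechanically before it is stored. A minimal validation sketch, assuming the vocabularies below (inferred from the sample responses shown on this page; the real codebook may contain more values):

```python
# Candidate code sets per dimension. ASSUMPTION: inferred from the
# sample responses on this page; the actual codebook may differ.
CODEBOOK = {
    "responsibility": {"none", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    if not record.get("id", "").startswith("ytc_"):
        problems.append("missing or malformed comment id")
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

record = {"id": "ytc_UgwaYFiO6DHn_Jo7aB14AaABAg",
          "responsibility": "none", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "approval"}
print(validate_record(record))  # [] -> record is valid
```

Rejecting out-of-vocabulary values early also catches the common failure mode where the model invents a new label mid-batch.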
Raw LLM Response
```json
[
{"id":"ytc_UgwaYFiO6DHn_Jo7aB14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy0Ffc29mOsErW4-zJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6K5DlipRQ3e0PoKx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyOIfTn4tJR4SzmKzJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwSxYqdN3qWu6te49x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwk2Sxns01avasl89t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy3Ec1rzZJLNQqKI1l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw0qFOQJF8aFuqihmh4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzCy8Csqdv7wObbBQl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxkyBDEAf-8W1acQ5t4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```