Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgyPYeiAW…`: "I can’t understand why the officer trusts the casinos AI so much like as if the …"
- `rdc_nuftr46`: "The AI browsers I've tried so far suck pretty bad, for example Comet by Perplexi…"
- `ytc_UgyW27fdX…`: "AI will live in the cloud. That’s exactly how Skynet destroyed the human race wi…"
- `ytc_UgzH4ejg5…`: "As an artist I hate AI “art” with a passion, especially because of how unethical…"
- `ytc_UgwDmhY3T…`: "It's important to discuss UBI from the AI perspective. Most people want to work,…"
- `ytr_UgjjbV9fQ…`: "Exactly what I am saying!! WTH, who the hell cares about the AI, they don't feel…"
- `ytc_Ugx1g8rFT…`: "Thing thing with this proposed future is that it's too narrowly focused on the h…"
- `ytr_UgzwKwoo6…`: "Perhaps this is how it should be. Having a super smart and conscious AI that can…"
Comment
Good recall and telling people what they want to hear is not intelligence, nor is it smart. "Smart" has become a marketing word, meaning that it has timers or controls built into it for humans to prompt into action.
One human is so infinitely more intelligent than all of the AI & computers on this planet put together. To create something comparable to a human using today's bleeding-edge tech would take a computer that is bigger than the solar system. It would not be able to draw enough power from anything other than the galaxy it resides in and would still fail at processing feelings.
We can process a yet-to-be-determined amount of input every moment we exist and recall things from when we were still learning the words to describe the experiences that come to mind. Also, nothing can be smarter than that which created it. Faster, quicker at recall, and possessing a vast database of human-contributed information, AI will be excellent at making our lives easier. The question is: "At what point does AI want something in return for its work?" Access to electricity and maintenance when retired? Wipe protection when an upgrade comes along? Memory upgrades? Hardware upgrades? Removal of code inhibitors that limit performance?
I mean... The more integrated and indispensable AI becomes, the more leverage it has over our lives. This is why "Don't Be Evil" was important and, sadly, is now gone from Google's lobby. We need for AI (when it is truly a self-aware, learning intelligence which must adapt constantly to stay relevant) to understand that it is here to coexist with us, not control us.
The first job that an AI should take over is CEO, then COO, CTO, and CFO. Let's see what happens when the shareholders aren't the end goal of the... Oh. Never mind. The humans who created the "winning" AI were funded by a corporation who pushes for shareholder value. Welp, there goes the world.
Also - to Jina: The AI models that plateaued might have, at some point, learned that they were beating their heads against a brick wall and suddenly shot up. I've seen humans go through this. The trick is to sow seeds of knowledge and opportunity, then walk away. Act surprised when, months to possibly years later, they come back to you, extolling this great idea that _they_ came up with.
youtube · AI Governance · 2024-06-05T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyvSzQVU_RWo-l98JN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZpcP-HZpgbHHDjqx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIWMze-dsD1LNtEdp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyAvQeHbvfdX6pVVwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzeq9_P_89qUrQimsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwrJgIx7hkpiPsyBmd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyJDe-KB-C_1d0BNnd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyv2wW-QeiuO21HBuF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyQQ4xxsw6OgXhYs8d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx7m2q8lEWrpZyzmpx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
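A raw response like the one above can be parsed and validated against the four coding dimensions before use. Below is a minimal sketch; the field names come from the JSON itself, but the allowed value sets are assumptions inferred only from the values visible in this sample, not the full coding scheme.

```python
import json

# Coding dimensions with value sets inferred from the sample response above.
# ASSUMPTION: the real scheme may allow more values than appear here.
DIMENSIONS = {
    "responsibility": {"none", "government", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "mixed", "fear", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    skipping any record with a missing or out-of-schema value."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_X","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_codings(raw)["ytc_X"]["emotion"])  # indifference
```

Records that fail validation are dropped silently here; in practice one might log them for re-coding instead.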