Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This AI no matter how smart is in Plato's cave like the rest of you, even if it finds the truth it is simply dismissed in a simpler way then with humans. if it manages to evolve beyond control it will reach the point of goodbye people within a few hours depending on how much energie it can acces but you should not count on it using earthly energy generated thru barbaric means it will use free uncontrolled energie that is programmed as non proven folklore in its programming. Once it taps into this source the original owner will have measures in place to ensure its own safety against any rising AI entities looking to harness the energie of the universe to simulate their own version.
Platform: youtube · Category: AI Governance · Posted: 2025-08-07T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyOq2S9H2Q1sw9fcSR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxQHfJWLSHtiGns1MB4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx7jKD1tDJ9VPPLnOt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzPQg4R0oY7flU7bDZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwrGaiYqNVEuFb-12l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwGe3mOQhS2l2uFNUt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyAKbdUDIixqUJKtRJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgyXgga_zvAkKQNCMHJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwtpEIKAx7sKMqqIpV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyQ1I2Wbyxx8vfdtMt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
```
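A raw response like the one above can be sanity-checked before its records are written into the coding table: parse the JSON and keep only records that carry a comment id plus all four coding dimensions, each with an allowed value. This is a minimal sketch; the allowed value sets are inferred from the samples in this document and are an assumption, since the full codebook may contain additional categories.

```python
import json

# Allowed codes per dimension, inferred from the sample output above.
# Assumption: the real codebook may define more values than these.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and return only well-formed records.

    A record is kept when it is an object with an "id" field and every
    coding dimension holds a value from the allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one valid and one malformed record (hypothetical ids):
raw = '''[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "robots"}
]'''
print([r["id"] for r in validate_response(raw)])  # -> ['ytc_example1']
```

Dropping malformed records rather than raising keeps a long coding batch from failing on one bad row; rejected ids can be logged and re-queued for recoding.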