Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "It seems that the wise approach would be to not give AGI/SI the ability to direc…" (ytc_Ugw8C__02…)
- "Wrong, people accurately predict the future all the time, it's a job to do so. A…" (ytc_UgxKVRIOc…)
- "Trick Question. A properly programmed self driving car would not have been that…" (ytc_UggO5i8Su…)
- "To think of the movie Bicentennial Man with Robin Williams, we thought wow can y…" (ytc_UgzEnWpRO…)
- "Can you explain how an LLM, which just predicts the next word in a sentence, cou…" (ytc_UgzB36QLX…)
- "A.i. just isn't very good T conveying meaning or emotion. Also it kinda has its …" (ytc_UgyQPwgxb…)
- "Why are you acting like humans don't have an higher margin of error than 0.01% t…" (rdc_oagg9am)
- "@Mafon2 If you automate everything with AI, you lose all creative decisions when…" (ytr_Ugz7jAmAx…)
Comment

> And what if your neurolink is contaminated by a digital virus...or hacked...our last freedom is the one of our thoughts..this can lead the owner of neurolink or some skilful hackers to be able to "read" people mind/thoughts (output) quite easily. Hence, if people are dishonest they might develop these kind of hacks. Very dangerous. Or if installed in the Nucleus Accumbens, you could render a whole population addict to the content broadcasted (input). Don't become the next Openheimer! Let the AI be the most intelligent and don't put humans at risk of losing their freedom. That's kind of the worst case scenario but you have to reflect on this...to make your neurolink safe enough for humans. Beware of pre-clinical studies on monkeys because you would quickly get "planet of the Apes". Proceed with extreme caution!

youtube · AI Governance · 2025-09-16T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgykudEW0U8eHLPL4w94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgweBfPh4a4wU6JVkBF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwFb26J8JlGn2mBn4J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGrd-PxBulEPeThzN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVSW-dIXQv1V9M_jh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy0H4Nq-dk9ngkFPlJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyR9yJaxaMTNufBixJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz7l4nG0SzFX_GKjp54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgxWcjXLPMNTuQfOHgt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugx2jWVdGQilILuYRB14AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
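The look-up-by-comment-ID view above can be reproduced from a raw response like this one: parse the JSON array and index it by `id`, discarding entries whose dimension values fall outside the codebook. A minimal sketch in Python; the allowed values below are inferred from the codes visible on this page, not from the full codebook, so treat them as an assumption:

```python
import json

# Allowed values per dimension, inferred from the codes shown on this
# page (an assumption; the actual codebook may define more values).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "disapproval", "resignation", "mixed"},
}

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, dropping malformed or off-schema entries."""
    codes = {}
    for entry in json.loads(raw_response):
        cid = entry.get("id")
        if not cid:
            continue  # no comment ID, nothing to index under
        if all(entry.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            codes[cid] = {dim: entry[dim] for dim in SCHEMA}
    return codes

raw = (
    '[{"id":"ytc_UgweBfPh4a4wU6JVkBF4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)
codes = index_codes(raw)
print(codes["ytc_UgweBfPh4a4wU6JVkBF4AaABAg"]["policy"])  # regulate
```

Keying on the comment ID is what makes the inspector's "look up by comment ID" cheap: each coded record is fetched in one dictionary access rather than a scan of the array.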