Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- ytr_UgywBImac…: It's not a "silicon phobia". A religious person might say that a soul needs to b…
- ytc_UgxTN5r-_…: Reminds me of something that's most likely gonna be an issue ahead in the future…
- ytc_UgzXNtwOv…: How is this even a Question... of *course* its copyright infringement. You are …
- ytr_Ugwbn3rS0…: @Bakobiibizo I’m just too old for the novelty’s fun. I reached my math goal. It …
- ytc_UgwBwIqC7…: I want A.i. to do my taxes and stuff people DON'T want to do. Not my hobbies, ar…
- ytc_Ugz5Y2vEO…: Art isn’t meant to be perfect. It’s perfect because of the hard work and imagery…
- ytc_UgylOCbHj…: This is AI right? you are telling me a drone strike causes no shock waves…
- ytc_Ugy2gU0N5…: Modern chat bots based on LLM:s don't reason or "know" anything. They don't appl…
Comment
I think "sudden" super-intelligence is quite scary. Self-replication/self-improvement is indeed quite scary. Just high intelligence with a high level of "agentness" and lack of supervision is already reckless.
The reason I don't believe in "sudden" super-intelligence is that I believe it would require way too much energy right off the bat.
I'm not really scared of "narrow" (low-agentness) AI, even if it is intelligent. If the training data, inputs and outputs of an intelligent AI system are data, papers and theory about fighting cancer, it is virtually impossible that system will go do scams on eBay. That would be like saying a sufficiently advanced version of Stockfish will eventually realize that it could do much better if it had tons of money.
I don't see why, by the time an intelligent (not super-intelligent) AI system starts to scam people on eBay, it would already be intelligent enough to be impossible to shut off. It would be extremely unfortunate for the human race if the first AI "scare" or "incident" is already the one that ends in human extinction. But on the other hand, after AI incidents it's going to be a lot easier to sell the public, and humanity in general, a permanent ban on AI research and engineering.
Sadly, the less obvious the dangers of AI, and the easier it is to inadvertently build a humanity-killing AI, the less likely an "AI ban" is to ever work. Even if every government agreed on a ban, the chances that a high-resource rogue actor who doesn't believe in the risks would develop it underground are extremely high.
youtube · AI Governance · 2024-11-12T00:4… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxwDnlEHA7QFwMzrZB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwGPNiP4G115HlCMmB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxgn2QDG4u3GwUCBPh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz431MRgmzceabjLdd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcbFmhgeHbLrPqRyN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx-xpntgp4QxxIED5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwePVVbMUGmOuwAgch4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyNv7S5t7BOv9eoxYZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwnNR89T2lV3e0tf7Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwIZrGwu4CUO899WoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
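The raw response is a JSON array with one object per comment, keyed by the four coding dimensions shown in the table above. A minimal sketch of loading such a batch and indexing it by comment ID so individual codes can be looked up (the comment ID and helper names here are hypothetical; only the four dimension keys are taken from the sample):

```python
import json

# Hypothetical single-row batch in the same shape as the raw response above.
RAW = """[
  {"id": "ytc_example001", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# The four coding dimensions seen in the sample output.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Index coded rows by comment ID, checking every dimension is present."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing {missing}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

codes = parse_batch(RAW)
print(codes["ytc_example001"]["emotion"])  # prints "fear"
```

Rows that lack a dimension raise instead of being silently dropped, which makes truncated or malformed LLM output visible at ingest time.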