Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
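Programmatic lookup follows the same pattern as the UI search box. Below is a minimal sketch, assuming the raw responses are stored on disk as a JSON array shaped like the "Raw LLM Response" sample further down; the file name `raw_llm_responses.json` is hypothetical.

```python
import json
from pathlib import Path

def build_index(path: str) -> dict[str, dict]:
    """Map comment ID -> coded record from a JSON-array dump of responses."""
    records = json.loads(Path(path).read_text(encoding="utf-8"))
    return {rec["id"]: rec for rec in records}

# Hypothetical file name; the ID is taken from the sample batch shown below.
index = build_index("raw_llm_responses.json")
record = index.get("ytc_UgzhnbDSVbaonn1FeLZ4AaABAg")
if record is not None:
    print(record["responsibility"], record["emotion"])  # developer fear
```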
Random samples — click to inspect

- "if you make poison they will make an ai that analyzes the data for poisoning and…" (ytc_UgyrmnlZ4…)
- "I haven't read the author (I think Greene?) contrasted with Dawkins. What exactl…" (rdc_djzgg0e)
- "Its already too late. The AI toys the public are allowed to play with, I imagine…" (ytc_UgyOdopch…)
- "Using algorithm like that to decide on someones future is pure madness... So gla…" (ytc_Ugy_9N1z0…)
- "Grok wasn’t “evil” because of a default setting or an accident. Musk set out to …" (ytc_UgzA6kQ9L…)
- "Dont worry 😂😂 self driving vehicles will need mechanics and technicians when a p…" (ytc_UgyFMdMYC…)
- "Other than some funny cat videos on YOUTUBE, billionaires want AI so they can us…" (ytc_Ugxk27F8k…)
- "i see all these videos about how ai will take jobs and stuff but i dont see any …" (ytc_UgzkF0pHO…)
Comment

> If you teach a dog to harm people, It will.
> If you teach and show little kids how to behave. 80% of the time, they will end up doing exactly what they have been told or shown...
> Do i need to go on? Ok..
> If you keep teaching A.I, on the question. "Will you destroy humanity" "will you k1ll all humans" and so on and so on.
> What did the 80% of the kids do? they became what they were taught. What does a bad dog do?
> Was that simple enough for you??
> It will do as you teach it! So if you let people give it the information on who to destroy. It will.. If you keep making movies and shooting games and ext.
> well.. you're the architect of your own self destruction...
> Good luck to you'll!
> //The Visitor 👽
Source: youtube · 2025-06-09T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
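Each coded record carries the same four dimensions shown in this table. As a lightweight sanity check, the sketch below validates a record against the label sets that appear in the sample batch; the full codebook may define additional values, so treat these sets as assumptions inferred from the visible data.

```python
# Label sets inferred from the sample batch below; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "regulate", "liability", "ban", "none"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the inferred label set."""
    return [dim for dim, allowed in ALLOWED.items() if record.get(dim) not in allowed]

assert invalid_fields({"responsibility": "developer", "reasoning": "virtue",
                       "policy": "unclear", "emotion": "fear"}) == []
```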
Raw LLM Response
```json
[
  {"id":"ytc_UgyUSHjaPnFSQElc6JF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwtpWPTEy3SqfdJsdl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgykTtgu31zfbrivDKd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwiAxH2kBRjib5mvbN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZ6h6lM3nPhhLF4qN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwiXRCXn9XeCYE98Sx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzhnbDSVbaonn1FeLZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxqNgPBHrna8raQjZB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy27JjmERhmZpfa_zl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxxXvl7E5s7Ktrd3oR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
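Model output is not guaranteed to be well-formed JSON, so a defensive parse helps when ingesting these batches. A minimal sketch, assuming each response should contain exactly one JSON array and that required keys match those shown above:

```python
import json

REQUIRED = ("id", "responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> list[dict]:
    """Extract the outermost JSON array from a raw response and check required keys."""
    start, end = raw.find("["), raw.rfind("]")
    if start == -1 or end == -1:
        raise ValueError("no JSON array found in response")
    records = json.loads(raw[start : end + 1])
    missing = [r.get("id", "?") for r in records if any(k not in r for k in REQUIRED)]
    if missing:
        raise ValueError(f"records missing required keys: {missing}")
    return records
```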