Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Still convinced that hallucination is constricting ai to a domain of having answ… (ytc_UgymRBJgM…)
- Clearly a smart guest but typical European solution to social media/AI is to reg… (ytc_UgzaLja8M…)
- Cool this is all 2 to five years old and I’m pretty sure everyone here only know… (ytc_UgxB3Y94b…)
- "An artist has no right to speak to gods". Gods as random dudes that have no tal… (ytc_UgxJ3TK-Z…)
- This will eventually bite everyone in the ass. The problem is, you need humans t… (rdc_m273rdi)
- Hey much like ai scripts your talking points have a limited number of permutatio… (ytc_UgymhTCDh…)
- >It's not even close to the same thing and I don't think Tesla denied the iss… (rdc_dcvcedd)
- Agreed. Anytime these CEOs are interviewed saying “AI is dangerous” is actually … (ytr_UgxlYKASO…)
Comment
I think the most effective way to poison AI is to program a virus that makes the AI spread the virus and then freeze forever and burn down the server
Like code that makes the AI resend the virus and then stuns it with a logical paradox
Or
Make a drawing that contains a logical paradox, and when the AI tries to copy it its CPU explodes
Like at the end of the day AI "sees" only in binary code
So it's possible to code logical paradoxes into colors or something similar so AI reads it and dies
Or...
Transform a 55-yottabyte zip bomb into a picture and let the AI eat it
If there are programmers in chat, correct me if I am wrong
Source: youtube | Viral AI Reaction | 2025-01-20T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
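A coded record like the one above can be checked mechanically against the coding vocabulary. A minimal sketch, assuming the value sets observed on this page (for responsibility, reasoning, policy, and emotion) are the full vocabulary — the real codebook may allow more values:

```python
# Hypothetical validator for one coded record. The allowed values are
# inferred from the examples on this page, not from an official codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "user", "company"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"outrage", "indifference", "fear", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown above passes; a typo in a dimension value does not.
print(validate({"responsibility": "ai_itself", "reasoning": "consequentialist",
                "policy": "ban", "emotion": "outrage"}))  # []
```

Running the same check over every record in a batch response is one way to catch an LLM drifting off-vocabulary before the codings reach the dashboard.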
Raw LLM Response
```json
[
{"id":"ytc_UgzYPnWcmwP2mmU3oT54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYwzw1xefQ7Ri_AOt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxrVEtoUZAVR_BiHaV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyygNre4fzwlMrZ0yB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyl24Vqzy3UnxKrFBx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzRgwVnEziLpPrAMkF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxcA_KJXuQBzF3o4f54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxx9h8ZofQ52HOXO0h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyAHGa383h83LgPsoN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx3WnCze6BfUL3m12N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
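Since the raw response is a JSON array of per-comment codings, "look up by comment ID" reduces to parse-and-index. A minimal sketch, using two records copied from the response above:

```python
import json

# Two records taken verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_UgzYPnWcmwP2mmU3oT54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyl24Vqzy3UnxKrFBx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# Index the batch by comment ID for constant-time look-up.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

coding = by_id["ytc_Ugyl24Vqzy3UnxKrFBx4AaABAg"]
print(coding["policy"])   # ban
print(coding["emotion"])  # outrage
```

The same index is what a "look up by comment ID" field would query, with a missing ID signalling that the comment was never coded (or that the model dropped it from the batch).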