Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Lmaooo AI might cause near term turbulence or whatever, but nothing can replace …" (ytc_Ugxb5Nf2L…)
- "@FauxbourgI have to take issue with anyone making the claim that any of the cu…" (ytr_Ugxb3QQvm…)
- "3: can we just stop with the ai thing its gone out of hand and these stupid redi…" (ytc_UgwmxW9bk…)
- "I look just because their men doesn’t mean they did it from misogynistic reasons…" (ytr_Ugw3oMH-1…)
- "The term \"hallucination\" is inappropriate for generative AI. Since AI is not con…" (ytc_UgyojNJPB…)
- "The Most High is always in control. Ai is just like Esau, It steals, lies, hoard…" (ytc_UgxHHOa4c…)
- "AI is great, it helps a lot with mundane things you are too lazy to do, but it s…" (ytc_UgzbYHMSz…)
- "i know how to get AI to police itself , do it the same way you get people to pol…" (ytc_UgyyvWQUD…)
Comment
@Baryonyxwithwifi By rendering all technological communication useless globally, and downloading nuclear launch codes to any device available before the word can get out to humanity that each device must be destroyed; or “unplugged.” An AI reaching that level of thought and having access to a national interweb of information that even humans can hack, would instantly give it the knowledge to do so, too. Remember, there was a time humanity couldn’t beat an AI at chess. To assume they cannot be smarter than us again is illogical, they already have been before. Giving them this much power would absolutely be unwise. They could nuke the entire planet, and that’s only one method. They could use electric cars to flood our highways and cause casualties, crash flights, rob people knowing that financial stability is important to humans. There are too many weapons and too much sensitive information that can be accessed and used against people if AI were to become sentient, and smarter than us. I’m sorry if this sounded rude at any point, I think my typing just seems aggressive. But this is all my opinion and theorizing of course, nothing necessarily proven.
Source: youtube · Topic: AI Responsibility · Posted: 2025-09-29T13:5… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugyu7QFdbQrduFmfzxB4AaABAg.AM7v1zYIkreAN38Dmi2UYF","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugx4pQcoIvs5YWoZYdR4AaABAg.AKs55_CZcDGAP-e9Y87-Qp","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugx4pQcoIvs5YWoZYdR4AaABAg.AKs55_CZcDGAPjjzU3DGG7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyxVJfMziZ76fTPuot4AaABAg.9pXdTJQezBu9ruyVL8aZYk","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgyxVJfMziZ76fTPuot4AaABAg.9pXdTJQezBuAPjA3om-3k0","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyCXStvFtFpj2NKr5N4AaABAg.9pXblQZIVlW9rDqyuEyAgW","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzloK3ASppBU80rPLh4AaABAg.9pVXZ7GQcVr9pWkJG1zaee","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugw7dbWYYMPQBbLjr7R4AaABAg.9pHHtcVb1f-ANe_LbyPZeP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugw19GY09M6rKejEDX94AaABAg.9oFp_nCh3gU9oG-Owf5r6Q","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugxkm37cVOoosg2Z69p4AaABAg.AFO-_RBWY7rANn8W-sb-jv","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
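A raw response like the one above is a JSON array of per-comment records, one per coded comment. The sketch below shows one way to parse such a batch and index it by comment ID so a single comment's codes can be looked up. The dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) are taken from the coding-result table above; the sample IDs and the helper name `index_by_id` are hypothetical, not part of the tool.

```python
import json

# Hypothetical miniature batch in the same shape as the raw LLM response above.
RAW_RESPONSE = """
[
  {"id": "ytr_abc", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_def", "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

# Every record must carry the comment ID plus the four coded dimensions.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and map comment ID -> its coded dimensions."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = EXPECTED_FIELDS - rec.keys()
        if missing:
            # Malformed model output: fail loudly rather than store partial codes.
            raise ValueError(f"record {rec.get('id')} missing fields: {missing}")
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

codes = index_by_id(RAW_RESPONSE)
print(codes["ytr_abc"]["emotion"])  # look up one comment's coded emotion
```

Indexing by ID is what makes the "look up by comment ID" view cheap: after one parse, every inspection is a dictionary access rather than a rescan of the raw response.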