Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI will be taken over by fallen angels, demons, and evil spirits. GET SAVED :)… (`ytc_UgxAuLNSb…`)
- Crypto? It’s not being adopted and hasn’t provided a use case other than creatin… (`rdc_j1xkya0`)
- put an actual human with a problem on the line and try again. also, it‘s so impo… (`ytc_UgxcAP0DN…`)
- My theory is that they are programmed to do these things and when the upcoming B… (`ytc_UgxuTjDL-…`)
- It's actually insane what open source models can do now. This is LTX 2.3 https… (`rdc_ocpw8i2`)
- if a human crashes a truck. people say "yea well it happens, they are human". If… (`ytc_UgxmMkcus…`)
- “Employers will use AI as cover for layoffs that they would make otherwise.” -Ga… (`ytc_UgxIc50kI…`)
- Ai is just gonna be an advanced calculator to the normies…..because of our gover… (`ytc_UgyUz_lnb…`)
Comment
I think the danger in AI will come from the absorption of satire. People have a hard time in telling if other people are being serious sometimes. So if AI can pull from the internet, lets say someone posts something and is ridiculed in satire more that other people posting. That is a very simple example but to a number crunch, the truth to pull from that example is the majority. That is a 'big picture' thing to fully understand, but think of people using 'bots' to flood responses and 'hide' someone's response they didn't like. That happens already! I am probably explaining it horribly lol, but how many times have you read something on the internet and been like 'Is this for real?'... How is an AI gonna tell the difference when it sucks that up to train itself? just my 2 cents...
youtube · AI Moral Status · 2023-03-31T11:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxtuhHP9hGn8-3jsGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzx696XXoelzF12ZFJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzk6xGjQJo-zicVraJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyYlJ6dZeXPy4cmFI54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"frustration"},
{"id":"ytc_UgwThoMkTMAIpUmIcxp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyDKtv9-QxCohyne9p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugym3pFTvig5twoNhIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx4rDHJxemKEsLSjcp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxZwBv-a9TTo5SEDDh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugy_6UJQT0JTHiHQ_mh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
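A batch response like the one above can be turned into the per-comment lookup this page offers with a small amount of parsing and validation. The sketch below is a minimal, hypothetical implementation: the allowed category sets are inferred only from the values visible on this page (the real codebook may define more), and `parse_batch` is an illustrative helper name, not part of any actual tool.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the values
# seen in the responses above; the real codebook may include other categories.
DIMENSIONS = {
    "responsibility": {"ai_itself", "user", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "frustration", "indifference",
                "approval", "resignation", "unclear"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response into a lookup table keyed by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Reject records whose values fall outside the known category sets.
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Look up one coding by comment ID, mirroring the table shown above.
raw = ('[{"id":"ytc_Ugym3pFTvig5twoNhIV4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugym3pFTvig5twoNhIV4AaABAg"]["emotion"])  # fear
```

Validating against a fixed category set at parse time is what lets a record such as the one rendered in the Coding Result table be trusted downstream: any model output that drifts from the codebook fails loudly instead of being silently stored.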