Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Large language models (all the hyped AI) work by finding patterns and extrapolating them to new situations that are still within the parameters of training data. They create text by basically mixing together text written by humans, that doesn't mean there is any sense of understanding - so when you ask for a story of AI taking over the world of course it can come up with one by mimicking other stories, but that is not evidence of the ability to attempt enactment. The real danger with AI is how god-like people act like it is, and the things they will entrust it with as a result. The real danger is its limitations, not its abilities.
Source: youtube · Project: AI Governance · Posted: 2023-07-09T06:2… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxpurExIs38lMfycuJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzQFTzxBOiYC1NX9dV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzzUIVJtfoWNUlfvdh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxaga64sc2N2WWQC3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx4QV2nbmtNvSErYv54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgztIvrxQz7unE8ATQ54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzGO17jS_1h2hbaYbp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx5pELiD4P1oSaBL5d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxzfkf670YfmKR-xZl4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyWljfmrb7tGxy3gh14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
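The raw response above is a JSON array of per-comment codes, one object per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and validated follows; note that the allowed value sets are assumptions inferred only from the codes visible on this page, not the pipeline's actual codebook.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMPTIONS inferred
# from the values visible on this page, not the pipeline's full codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting any
    entry whose value falls outside the allowed set for its dimension."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: invalid {dim} value {value!r}")
        coded[comment_id] = codes
    return coded

# Example using the last record from the response above.
raw = ('[{"id":"ytc_UgyWljfmrb7tGxy3gh14AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_UgyWljfmrb7tGxy3gh14AaABAg"]["responsibility"])  # developer
```

Validating against a fixed codebook at parse time catches the common failure mode of the model inventing a new label mid-batch, so a bad batch fails loudly rather than silently polluting the coded dataset.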