Raw LLM Responses
Inspect the exact model output for any coded comment.
You can look up a record by comment ID, or inspect one of the random samples below.
- ytc_UgzXFrKmG…: "If only AI had arrived sooner, Bluementhal could have used it as an excuse for w…"
- ytc_Ugzfp7hno…: "nonprofessional regulating professional prevent the innovation of AI in EU. How …"
- ytc_UgwFT1_bS…: "The ai is due to a new youtube feature doing that to every video, not roblox…"
- ytc_Ugy12vefQ…: "There is no way to unscramble the eggs and put them back in the chicken. AI is h…"
- ytc_UgyP6IzcS…: "he has many points about the bad side of ai, but i just want to have a phone who…"
- ytc_Ugy0tLJzv…: "Ai learns from history 😂😂😂😂😂 / He very well know white man's burden 😂😂😂 don't hate…"
- ytc_UgzPBmJn6…: "Passed one of those today. Heading west on I-10 east of Wilcox Arizona. There wa…"
- ytr_UgxPlxB3d…: "@TheBlackMage3 this is what ive tried to say in about 20 different posts. Mind i…"
Comment

> AI security / programmers had better start emphasizing ethical programming. That means "do not set out to non-defensively hurt, harm, or degrade others". Even that phrase's words should be very rigorously defined (you know how lawyers and philosophers are at word games). Otherwise we'll have life imitating art (the Terminator franchise). You may say "AI can't do that", but why take the risk. Here and no further, until we get a better handle on what makes a machine choose a particular 'thought' path.

youtube · AI Governance · 2025-12-30T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxePmJwhHuMWggNQS54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwI2PM-569qkSVAOlJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgygRBB64Gf4gvarMEJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyLwV9EUA0XH6jlIId4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz2p9zViB7CimWjGUF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzzUYBmsVtzDRzCrBp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzlS2mnhrakR8O-ByN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_Ugz6xB0jWkolE4-jlX94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw-PyAjE89rOoELSi94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyVPRu5tcOp94-o_PB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
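Because the model returns a whole batch of codes in one JSON array, mapping a comment ID back to its four dimensions takes one parse-and-index step. Below is a minimal sketch of that lookup, assuming the raw response is valid JSON shaped like the array above (the `index_codes` helper name is hypothetical, and only two records from the batch are shown for brevity):

```python
import json

# Two records copied verbatim from the batch response above; a real raw
# response would contain every comment in the batch.
raw = '''[
  {"id": "ytc_UgzzUYBmsVtzDRzCrBp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxePmJwhHuMWggNQS54AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def index_codes(raw_response: str) -> dict:
    """Parse a raw batch response into an id -> {dimension: value} lookup."""
    rows = json.loads(raw_response)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codes = index_codes(raw)
print(codes["ytc_UgzzUYBmsVtzDRzCrBp4AaABAg"]["policy"])  # regulate
```

If the model ever emits malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag the batch for re-coding rather than silently dropping it.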