Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I will not discriminate against any human based on Religion, sex, ethnicity, ect…
ytc_Ugy5BJVes…
Don't worry people. Trump will become president and then we'll all just be too p…
ytc_UgzT7F6Vc…
tech bros dont understand how tech works ALEXA IS NOT AN AI ITS A GODDAMN SEARC…
ytc_Ugz9oFrp-…
I tried the first one with Meta AI and it said
"😱 What a tough one! 🤯
If I pull…
ytc_UgyGdTR6l…
See this version... this 'realistic' version is like the very first of video gam…
ytc_UgzzTs8DD…
I want to be honest... maybe those who use AI to make artwork are people who suc…
ytc_Ugx7BMrfc…
Those who own the machieans will be richer then gods. Those who don't will die. …
ytc_UgwDZ-Phi…
Day by day my respect for EU is increasing. I used to think USA is doing great i…
ytc_UgyRNOQQz…
Comment
@KevinGulling It depends... It can do very complex math since it has a lot of training on mathematical concepts, but it can fail at properly carrying digits, similar to how it will mess up at identifiers or get confused about version consistency if you tell it to program something for you in one shot. The difference is that you can take that code, try to compile it, and then see exactly why it's wrong, which means you can bring it back to the AI with those results or let it do confirmation itself, like if you give it tools to confirm its results (like Python/REPL). This is why tool calling is more prevalent now. If you've tried telling an AI model to do something long-form or complicated like decoding from base64 or extracting text from a PDF, it will use tools automatically; but if it didn't have those tools, which it didn't for a long time, then yeah, it will be terrible at it and confidently provide/hallucinate wrong answers even for basic stuff because it had no symbolic grounding.
youtube
AI Governance
2026-03-17T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgwWyW3vQg2x9xIptB54AaABAg.AURk1-1DAAIAUSjwE5WdQX","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwzGmtADaJEY6k8qmd4AaABAg.AURi2sb4-VbAUTdup9Exgj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwzGmtADaJEY6k8qmd4AaABAg.AURi2sb4-VbAUZ_GgE0DjN","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugx9UBQQH62TXoM3hoV4AaABAg.AURhjNEiVbOAURn4xYUs4U","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_UgzaPv4Qlg_zxb4Bwdt4AaABAg.AURhQgzcup7AUSijQTkShd","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxWkOCjBPsxNPNil3p4AaABAg.AUReWdyptusAUS1cAPNH29","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxWkOCjBPsxNPNil3p4AaABAg.AUReWdyptusAUS3WKb34Wk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx0ioAUFHKUh2EkHf14AaABAg.AURdsXd4PoVAUV7_fjOuWH","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"sadness"},
{"id":"ytr_UgyEYKV9ah0Y7WJ3eiZ4AaABAg.AURdVyQ5hJRAURg5zNhFoK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgxJtANl7XWDUrYvchp4AaABAg.AURc-4KMPnUAURi0J1hB_2","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
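The inspected comment above argues that a model's output becomes checkable once you can run it through a real tool and feed the result back (its example: compiling generated code, or decoding base64 with a Python/REPL tool instead of "in its head"). A minimal sketch of that verification idea, with a hypothetical `verify_base64_claim` helper not taken from any real tool-calling API:

```python
import base64


def verify_base64_claim(encoded: str, claimed: str) -> bool:
    """Ground a model's claimed decoding by actually running the decoder.

    Instead of trusting a (possibly hallucinated) answer, execute the
    symbolic operation and compare against the claim.
    """
    try:
        actual = base64.b64decode(encoded).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        # Malformed input: the claim cannot be grounded, so reject it.
        return False
    return actual == claimed


# "aGVsbG8=" decodes to "hello": a correct claim passes ...
assert verify_base64_claim("aGVsbG8=", "hello")
# ... and a confidently wrong claim is caught by the tool, not by the model.
assert not verify_base64_claim("aGVsbG8=", "he1lo")
```

The same pattern generalizes to the comment's compiler example: run the artifact, capture the error, and return it to the model as evidence rather than asking the model to self-assess.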