Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Videogames have been using Generative AI for ages to make infinitely unique maps…" (`ytc_UgwwmDLXB…`)
- "i think i agree with claude since without power, hospitals are cooked and the pe…" (`ytc_UgzAuv0H8…`)
- "Governments should intervene? Lol the government is more corrupt than anything A…" (`ytc_Ugy5GcBDX…`)
- "Tldr: Generative AI isn't Hollywood AI and I wish people would stop thinking it …" (`ytc_Ugw6YTviQ…`)
- "Omg, what is this lady saying? Ellon musk started open AI because he was scared …" (`ytc_Ugxi9Sr1E…`)
- "Haven’t been able to turn on any form of social media in the past two years with…" (`ytc_UgwJ-QRI1…`)
- "It's effective, doesn't get angry and let the student ask questions that he/she …" (`ytc_UgyVQjqbr…`)
- "It doesn't make a difference suicide by AI is way lower than any cause of suicid…" (`ytr_UgyYpo57O…`)
Comment
I have to add, after some more experimenting with AI tools, there are AIs that do actually work well. I started using Claude Code and that has impressed me. Figuring out itself without my input to test its changes is pretty impressive. Other than that it's an impressive tool and O can actually build complex things on my own exactly the way I want to build it for my own projects.
youtube · AI Governance · 2025-06-18T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgyXq19Z7WKkUIyLMSR4AaABAg.AJRz7Zyt2VSAJS1rV55or7","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyXTXV56oZbwcc3fhV4AaABAg.AJRxzrZPqOgAJS0Kul27Cd","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzyKHTttpMgMEfkbed4AaABAg.AJRxVjdCZRvAJW8kt0eBig","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxMC8zNuWTpg1bZ4pl4AaABAg.AJRxSqyveJqAJSMlD_xiHK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgxMC8zNuWTpg1bZ4pl4AaABAg.AJRxSqyveJqAJSQsAtE6qt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxMC8zNuWTpg1bZ4pl4AaABAg.AJRxSqyveJqAJSYWGK_MxE","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugy25lAtNSqnJa4IRX94AaABAg.AJRuuivXYzJAJRvaoOv-es","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugy25lAtNSqnJa4IRX94AaABAg.AJRuuivXYzJAJgtvevUO6p","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx3YiLqOIw62el1oMp4AaABAg.AJRtru2lnZPAJSAsQUxQXG","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxtzjiicrGGXTP5QnZ4AaABAg.AJRtQE81xnZAJT1buQ1E5R","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
```
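A response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the response is a JSON array of objects as shown above; the allowed code values are inferred only from the sample on this page (the real codebook may define more), and the function name is illustrative, not part of any actual pipeline.

```python
import json

# Code vocabularies inferred from the sample response above (assumption:
# the full codebook may contain additional values; extend as needed).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting rows with missing fields or out-of-vocabulary codes."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row!r}")
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {value!r}")
            codes[dim] = value
        coded[cid] = codes
    return coded

sample = '[{"id":"ytr_example","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
print(parse_llm_response(sample)["ytr_example"]["emotion"])  # approval
```

Rejecting out-of-vocabulary codes at parse time keeps a single malformed model output from silently polluting the coded dataset.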