Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews with comment IDs):

- "Automation is good, individuals owning everything including all the benefits of…" (ytc_UgzoOU8OF…)
- "AI can revolutionise medicine and much more but only veganism can cure our ills.…" (ytc_UgysU23Yw…)
- "We are fucked PEOPLE! Less than 15 years and the AI will be the majority and hum…" (ytc_UgwNpYoXK…)
- "Nobody will probably even be coding in 5 years. It will likely all be AI…" (ytr_UgxOlNLzi…)
- "This is what they look now, in 5 years or 10 years, we won't be able to tell who…" (ytc_UgzpCk-ms…)
- "Have no fear. AI art will never replace human art. The demand for human art will…" (ytc_Ugz-kVNC3…)
- "Hii, we just updated the app (v1.0.5) where you can export and import the data. …" (rdc_o7nteyn)
- "That is big ass IF 😂 Notice he does not say open ai but the Feild and ai in gene…" (ytc_Ugzb7-dPm…)
Comment
@Alexander_KaleI wouldn’t say it’s overhyped in the sense that with how quickly AI is advancing and becoming more intertwined with our daily lives that if we are a society aren’t careful with how it’s implemented it could put us in bad scenarios in the future. For example Grok 4 already has an enterprise tier that’s called “Grok for Government” that was announced back in July. I’m just saying that soon these very advanced models/ LLMs / systems are going to be everywhere and they are already smarter than most humans. even the experts in the AI industry truly don’t know the dangers and here in the US they made is so there are less regulations for the next 10 or so years. Just look at how quickly things are advancing in 2025 alone compared to the last few years. I work with AI and love the tech so I don’t want this to come off the wrong way like I’m some kind of AI “doomer” because that’s not how I feel at all. I have seen such a radical shift in how advanced this tech is which blows my mind because most people really don’t understand what it’s truly capable of. Go look up “HeyGen digital twin” or “SynchroVerseAI” to see some pretty cool usecases
youtube | AI Governance | 2025-08-29T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugxv_UEXyhKZ7R9Xii94AaABAg.AMKsrtpfL1uAMOKkSTfPE6","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugxv_UEXyhKZ7R9Xii94AaABAg.AMKsrtpfL1uAMOTaSOzzuY","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyyTU_-ZLDYNN5NIIJ4AaABAg.AMKs3PMvMIOAMOz2scifbH","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz_Q9TMlGfz7tOvZXt4AaABAg.AMKoSxUbyD9AMMkMIbVd0g","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz_Q9TMlGfz7tOvZXt4AaABAg.AMKoSxUbyD9AMNi3PAx9lx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyCYeW-0dcc1esbUXp4AaABAg.AMKmVzfKNtXAMPlr2fr_tr","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwQ2Cu0lTRVOesepF94AaABAg.AMKcaXEYhl0AMO6ApBQKrr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwMZ3q4QX4DVeRWs2B4AaABAg.AMKVx5Hq5TiAMKqNkEMtjV","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwMZ3q4QX4DVeRWs2B4AaABAg.AMKVx5Hq5TiAMKtzCj0wfH","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwMZ3q4QX4DVeRWs2B4AaABAg.AMKVx5Hq5TiAMNvCNIRrne","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
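A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before storing the codes. The allowed values below are only those observed in this sample; the real codebook may define more, and the function name and strictness are illustrative, not part of the actual pipeline:

```python
import json

# Allowed values per coding dimension, as observed in the sample above.
# (Illustrative only: the full codebook may include additional categories.)
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows with no comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_llm_response(raw))
```

Rows that fail validation are dropped rather than repaired, so a malformed model output surfaces as a shorter result list instead of corrupt codes.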