Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- If AI really chose to take over, the only way it would stop is if the company t… (ytc_Ugz1WHdEg…)
- AI is nothing more than a mass surveillance project. While we debate how many an… (ytc_Ugzz1lfJ3…)
- The danger of AGI is billionaires and government having it and not us, the peopl… (ytc_Ugx6fNM4K…)
- AI personally isnt bad. But the people use it wrong and call themselves "the ar… (ytc_UgxsT73ob…)
- Why not simulated pleasure. Rather than torturing a robot into cleaning our floo… (ytc_Ugzbd6o3_…)
- Well, it's just the beginning; AI may present flaws in image generation, but it'… (ytc_UgwD9BGGP…)
- All IM HEARING IS NEGATIVE REMARKS. PLEASE UNDERSTAND. THERE ARE INTITIES OUT H… (ytr_Ugx27Mhft…)
- the only reason generative ai concerns me is that how many fake videos or images… (ytc_UgxNryV3s…)
Comment
I don't really worry about "AI" ... what I worry about are the Sociopaths that have taken control of it.
They keep talking about "aligning" AI, but the real threat is "mis-aligned" human beings.
Demis Hassabis at Google seems to be the only "good person" in the field who has actually wanted to benefit the human race.
youtube · AI Governance · 2025-12-04T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzh-YnBzznNZ1qWyQ14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwRQFKmdG19FO8-AD94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyW-LL40QAOwv9pVQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgI5BFRrLZ996_qmh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxPhQDdOKhYQvLluoN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwv4OkyAKs3UvA-eaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz4AjPalg3rFU87gH14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzncXg6mJHXTVaCS414AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxFxasyzxX1MBY0FJ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx9dnj66M0hUfEUgrN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
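The raw response is a JSON array with one coded record per comment ID, so the "look up by comment ID" view can be reproduced by parsing the array and indexing it by `id`. Below is a minimal sketch, not the tool's actual implementation: the `RAW_RESPONSE` string is an abbreviated copy of the records above, and the `build_index`/`lookup` helper names are illustrative.

```python
import json

# Abbreviated copy of the raw model output shown above:
# a JSON array of per-comment coding records.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugzh-YnBzznNZ1qWyQ14AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwv4OkyAKs3UvA-eaV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def build_index(raw: str) -> dict:
    """Parse the raw LLM response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

def lookup(index: dict, comment_id: str):
    """Return the coding record for a comment ID, or None if absent."""
    return index.get(comment_id)

index = build_index(RAW_RESPONSE)
result = lookup(index, "ytc_Ugzh-YnBzznNZ1qWyQ14AaABAg")
print(result["responsibility"], result["emotion"])  # user outrage
```

Indexing once and reusing the dict keeps repeated lookups O(1), which matters when inspecting many comments against a long batch response.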