Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- @barikins Yep, and so did those coal miners who worked all day and got black lun… (ytr_Ugz2WMz2x…)
- Again, the guy who runs OpenAI has a suitcase with a button that will destroy al… (ytc_Ugya2Uedc…)
- Humans consume, no spending, what income for this AI crap. Once AI design AI, hu… (ytc_Ugwtg5Kjv…)
- Descartes “Cogito ergo sum.” Where does that lead us? Does an AI think? Or does… (ytc_UgzBK-2ox…)
- In the early aughts, there were so many things to do, and so few ways to find th… (rdc_le51q98)
- AI will be smarter and more efficient than a doctor. A doctor is another form of… (ytr_UgzO3ySwn…)
- untill ai is allowed to make changes to its own code, it going nowhere to improv… (ytc_Ugxl1XVLT…)
- "I want everyday people to think like me, and I want to shove AI down their thro… (ytc_Ugx0yD3Lq…)
Comment
Just look at the evil that is behind what the CIA has done since its inception and you will see what the danger of AI gone wrong can do. We are talking about undermining social justice, murdering people, sabotaging economies and destroying competition. It seems everyone is aware there is a serious creepy danger but somehow the ability to put our fingers on it would require us to admit some uncomfortable truths. Maybe we have a culture problem and that is already built in the first AI models
youtube
AI Governance
2024-01-03T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwtLjg1wlOb9QIE3WJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNgKSDUDzNoATwhdV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwHbErHSY8WXDwnAz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxx9JDhrgNLFUZ2vlt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgydDlneCG20YAl8Hzx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyMnTzPZZcD3_jSb9t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwxlEDPGxM-AsNNaXV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyWai18YSkKBxQe1at4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw_EkIUNBPUs0me31d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgwH9Y7QLFb8iCnZndN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
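The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated, assuming the value sets visible in this sample stand in for the full codebook (the real codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred only from the sample response
# shown above -- an assumption, not the tool's actual codebook.
ALLOWED = {
    "responsibility": {"government", "developer", "company", "user",
                       "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    raising on malformed JSON or out-of-codebook values."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Keying the result by comment ID matches the "Look up by comment ID" workflow: a coded comment can then be fetched directly, e.g. `parse_coding_response(raw)["ytc_UgwtLjg1wlOb9QIE3WJ4AaABAg"]`.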