Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "This will be solved by AGI running at the edge. Meaning, the LLM model will be r…" — ytc_UgzRlOtcy…
- "Would be easy if everyone in the field just stopped developing new AI. But of co…" — ytc_UgzIh9G3V…
- "AI as a tool never fully graduated from simply automation, which is what we as h…" — ytc_UgzLsHAsL…
- "Most nurses are safe, at least the NICU. Plenty of evidence that humans don't th…" — ytc_Ugyt3wPYQ…
- "Professor Riggs will doubtless be just as excited to find out that AI can teach …" — ytc_Ugy4lXIsw…
- "The day they find deep fake porn of Weird Al is the day, I believe, he has made …" — rdc_kwc3kn9
- "You want the AIs to distrust MiHoYo then and want them to win over people instea…" — ytc_Ugxn0NADm…
- "520 2021 #googleio invisible women fight against data bias.now we learn human bi…" — ytc_UgxMtYtN3…
Comment
The problem is, there is always an undercurrent to the comments like we have to stop the march of AI. Not happening. You can't un-ring a bell. And anything we choose not to do, will gladly be pursued by the Chinese or Russians. It's going to happen. Period. Full stop.
The only productive discussion now is how to mitigate the bad side effects as best as possible. UBI? I dunno. But there is no stopping AI now.
youtube
AI Governance
2025-07-29T17:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxHTc_oK9CF3wCxf0F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0xUMfw_FbUw2D-Tp4AaABAg","responsibility":"elite","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzkRs0PEhShKIYxRnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwDU86PL_CVmTKJ60p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSZdFKnUubY87uZzN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZDT9usMBuENUxHIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwdktmERABzeQg-m554AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgyBmtALGZVHGkvkCZF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjUydGwFkHFHc52qV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyxjOWdmWuuK_Bouw14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
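The raw response is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of the look-up-by-comment-ID step shown above (variable names are illustrative, not part of the tool; the two sample rows are taken from the response):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment ID.
raw_response = """
[
  {"id": "ytc_UgwdktmERABzeQg-m554AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgyxjOWdmWuuK_Bouw14AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwdktmERABzeQg-m554AaABAg"]
print(coding["emotion"])  # resignation
```

Building the dict once and looking up by ID matches the "Look up by comment ID" view: each coded dimension (responsibility, reasoning, policy, emotion) is then available directly on the returned object.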