Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Covid has taught us that we as society cannot collaborate and act in the best in…" (ytc_UgzOFBuVt…)
- "These people make trucking look awful. Truth is, at any point right now I can ma…" (ytc_Ugyp8_VDP…)
- "Considering how humans lack common sense, it will be a tough road. You are talki…" (ytr_UgzA5EWM5…)
- "I don’t know just how AI they are. Seems more like a lot of “if this is said the…" (ytc_UgxjnixJh…)
- "Clearly ai infused that’s why he has to take a moment before he speaks so his ai…" (ytr_Ugz3mjOhL…)
- "That's a fascinating observation! It's amazing to see how AI technology can evol…" (ytr_UgykXt1QX…)
- "I had A.I. Read the letter as it is kind of in keeping with the tone of the lett…" (ytc_UgwkcvDgW…)
- "its just another example of how dorks let they’re emotions get the better of the…" (ytr_UgxryYIZs…)
Comment
Around the 37-minute mark the conversation shifts to the public and private faces of the billionaires who are deeply vested in AI and who think that they can use it responsibly even though they see the outcomes as distinctly negative in private conversations. This is classic hubris, and wherever hubris goes, nemesis follows. (It is the One Ring problem of the responsible use of extreme power: even with the best of intentions, that much power will ultimately cause catastrophic consequences. Hence my reference to Hubris-Nemesis. Even if it is *merely* unintended consequences, that is precisely what will follow, and we are headed down that road. I don't know what to suggest, because game theory will lead to races based on what other players do and expect others to do, and that is how it all goes deeply sideways.)
youtube
AI Governance
2025-06-19T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzqsPgpYMdeh4qBafZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyD3Y_NWKuQK_w0Nyx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzaV0rP8zakKld5YR54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxXCmAKdGQne-DvCiN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwPU1mkzxXZ5aPr2iN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzo6vo1ugsWgWUfBeR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyugV5b0_J8gcBiIQt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxwhwHmki-jJLOTxnF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwR3VFGq_BQHK_rcBl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz6tH3QYNAX1SO_Y8V4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
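A raw response like the one above has to be parsed and sanity-checked before the codings land in the table. Below is a minimal sketch of such a validation step. The dimension names and the value sets are taken from the samples shown here; the full codebook presumably allows more categories, and the `validate_codings` helper itself is hypothetical, not part of the tool.

```python
import json

# Allowed values per coding dimension (inferred from the samples above;
# the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "mixed", "fear", "indifference", "outrage", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and hold a known value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgwPU1mkzxXZ5aPr2iN4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # 1
```

Rows that fail validation would be dropped or queued for recoding rather than written to the results table, so a malformed model response cannot silently corrupt the coded dataset.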