Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated preview, with comment ID):

- "I have said this before. AI art in its current iteration is a powerful tool ONLY…" (ytc_UgwtBnliN…)
- "@rmt3589 people who think that they are „one of the last ones tricked/catched/et…" (ytr_UgzZSjT89…)
- "If AI advances at any positive rate, it will make humans obsolete. It is idiotic…" (ytc_UgzfR8G68…)
- "ChatGPT is already deceptive, but it could be because some topics are too danger…" (ytc_UgwORJR05…)
- "I'm genuinely scared guys I knew this would happen but I thought I'd be 40 I don…" (ytc_UgwsVWCjl…)
- "still references as ai will mess up and be more misinformational than a real pho…" (ytr_Ugx2zK7mR…)
- "Either way, whether I go out fighting the robots or AI takes me to space or it …" (ytc_Ugy3n4mmo…)
- "Everyone's job is being automated. Look around. Even the CNBC anchors jobs is be…" (ytr_Ugye3rdoO…)
Comment
(chuckling) Well... we can stall all we want *now*, and pretend (to each other, and... ourselves) that we're capable of meaningfully regulating AI, but it's been obvious for years that we'd reach this point. Perhaps especially given the economic model that (currently) rules our lives, y'know? ...wherein the dollar-driven norm is to deploy new tech as fast as possible in order to gain first-mover advantage, regardless of risk(s).
We've seemingly passed the point that we can even pretend to imagine any realistic / meaningful guard-rail options we could implement as development forges ahead. If we don't have a Max Headroom "breakout" moment within the next say, ten years, I'll be genuinely astounded. And that seems like a very conservative estimate, given how just the last year has played out. What happens after that... really is anyone's guess.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2023-04-01T20:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxm4HsR3CUCXGEnKZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyEbAZEItFFYbWdo8d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugze-2uUxuINIAJ8CgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzCXyoSURR5oyj2t-N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugwl0Cw5u30SGjdmgbt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyfVhUaGEsokq7Zc8d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzNgG3hSaYTyN6N0eF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz9yxXDzTe5SdsLMsR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxRDzoJO_FCjfy5fEN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw4GkdR-OZ4LMX9-6h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
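A raw response like the one above can be checked mechanically before it is loaded into the coding database. The sketch below parses the JSON array, validates each record against the dimension vocabulary, and indexes records by comment ID. The allowed values are inferred from this single sample, not the full codebook, so treat `CODEBOOK` as an assumption to be replaced with the project's actual code list.

```python
import json

# Allowed codes per dimension — inferred from the sample response above;
# the real codebook may contain additional values (assumption).
CODEBOOK = {
    "responsibility": {"none", "company", "developer", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear"},
}

def validate_records(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Raises ValueError if any record carries a code outside CODEBOOK.
    """
    indexed = {}
    for rec in json.loads(raw):
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        indexed[rec["id"]] = rec
    return indexed

# Hypothetical one-record response, for illustration only.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]')
coded = validate_records(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up the exact model output for any coded comment" view above cheap: each inspected comment maps straight to its coded record.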