Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I do want to push back on the dichotomy of regulation to prevent extinction vs. continuing on to advance past China.
I think this video overlooks that if we pause regulations other countries very well may not and will continue on anyway. If there is an AI that can shut down an entire power grid it will probably be used maliciously in geopolitics and I certainly don’t want to be on the receiving end — nor do I want anyone to.
Any agreements to halt AI development must be international and have an enforcement mechanism with real teeth to ensure countries follow it. You made an analogy to chemical weapons; they’ve been used several times since they were outlawed.
Would countries caught continuing dangerous development truly face consequences? It would have to be war or complete economic isolation - both of which are drastic and disastrous to the economy but anything else as a punitive measure is drastically outweighed by the pro of being the only country with super intelligent AI.
| Source | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2025-08-27T20:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxBygBlckfU60wBxVR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzeS-tdYlBKhJo59Xl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJf8D7msjDMX8e_YZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyY1hg1dJ2n0yqo4aR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzHLQ93nAOv4VA3x5R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
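A raw response like the one above can be turned into per-comment coding rows with a small parse-and-validate step. The sketch below is a minimal illustration, not the project's actual pipeline: the field names come from the JSON shown, but the `REQUIRED` tuple and the `parse_coding_response` helper are assumptions introduced here for clarity.

```python
import json

# Hypothetical example input, shaped like the raw LLM response above
# (one element shown; real responses carry one object per coded comment).
RAW = '''[
  {"id": "ytc_UgxBygBlckfU60wBxVR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# Assumed required fields, taken from the keys visible in the response.
REQUIRED = ("id", "responsibility", "reasoning", "policy", "emotion")


def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw coding response and check every row has all fields."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coding objects")
    for row in rows:
        missing = [key for key in REQUIRED if key not in row]
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing fields {missing}")
    return rows


rows = parse_coding_response(RAW)
print(rows[0]["policy"])  # -> ban
```

Validating the shape up front is useful here because a model occasionally returns prose or a truncated array instead of clean JSON; failing loudly on a malformed row keeps bad codings out of the results table.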