Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I am a bit worried about my job, not gonna lie. I think there will be work for …" (ytr_Ugyy-1RDb…)
- "Well, I don't think LLMs are sentient. But to be honest, we cannot know because …" (ytc_UgwfOJOl8…)
- "48:18 iduno when this interview was recorded but as of 11.14.25 i think the oppo…" (ytc_UgzV-oOq6…)
- "I think all of your problems raise from the singular fact that you, and many oth…" (ytc_UgxYXJc-O…)
- "There might be a bubble, but it'll be different than the internet one. The basic…" (ytc_Ugw0p9W65…)
- "I'm glad this got picked up internationally. I teach in South Korea and one of …" (rdc_clvanzk)
- "Google dont use AI. Use a Pentium. A tool for Dummies. A kind of a Sleepe Cinder…" (ytc_UgwJ-I_WD…)
- "Senator, I can write you a paper about AI. It's a terrible idea and I think I'll…" (ytc_UgzV5yxYS…)
Comment
I think everything needs to be regulated including regulations themselves. The legal system in America shows that anything can be used for hostile intentions. The bigger problems are that our world is not at peace. And for something like information, which is what an AI is. To be limited in anyway, everyone has to agree. What good would it be to limit AI here, just to then let China go all-in? Ai is like Nukes. If one side has it and the other doesn't, then you're always vulnerable.
Source: youtube · Topic: AI Governance · Posted: 2025-07-18T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyHOIZfBBRd9rfuNu94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxwlFjhJpVoPKMwaFB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZJav79amXxUrScRR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyIRSrEeeQmTeFaBAJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyaiVul7ruK6mpzOu54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugze8LurW5gtBdJzUwZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgypQR8WGUurmdSbuKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz4glno3-lls5ppFXJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzvKoRRA6pTrrYBj8B4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw9H3moKYVoHDenddZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```