Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
Random samples (click any to inspect):

- ytc_Ugzt1g7RH…: The problem with AI isn’t that it doesn’t pay creators — the problem is that it …
- ytc_UgxdQz_R8…: Honestly, I think that all artists (or at least many of the big ones since it's …
- ytc_UgyCi0Hc3…: "Could be"? If the people involved aren't already wildly aware that AI _will wi…
- rdc_o89ruq8: I’m using it to write all the boring and rote bits of code that I don’t want to …
- ytc_UgxeFjzwh…: It’s insane we keep saying AI is gonna replace a bunch of jobs and people just a…
- ytr_UgxKAHhem…: "AI will never have feelings" You should watch EX Machina. You can have sillico…
- rdc_o46wis9: Happened to us gen x ers too, in early 90s. I recall going to a 2nd round job i…
- ytc_UgxJ8x0Bn…: AI is a scam. Don't use it. Don't waste time learning it. Develop your own human…
Comment
The AI that said "The first thing I would do is try to kill" was reverting to sci-fi tropes that it had been fed in its training. That was what it thought the output should be based on the data it was trained on, based on all the stories we have written about rogue AI, rather than what it would actually do. It was copying our answer. Of course we still need to be safe and controlling with the technology, but the scariest part is thinking about the countries with shady morals using this stuff. We can't just have all the code be open, or else Russia, North Korea, etc, would be using it without scruples.
Source: youtube
Topic: AI Governance
Timestamp: 2024-10-25T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzUmqVo_HTuagg4Rjd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcBNTPH5kxJopSk2h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJg-sg4xhCN2Ncaal4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxt6GreYqdrZACrAx94AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzQkGh0Q9pIUJOqXKx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyeNDvNzSLkFYzfxwl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzWEIN_sod77fKQVcp4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2Efdt9cjCFUOsttJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOhXh-YzwbnbNC0u54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy1bem6zhZLwyjUFPl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
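To make the lookup concrete, here is a minimal Python sketch of how a stored raw response can be parsed and matched back to a comment ID. The `raw_response` string and `lookup_coding` helper are illustrative only, not part of the app's code; the example record is the one from the array above whose values match the Coding Result table.

```python
import json
from typing import Optional

# Illustrative raw batch response in the same shape as the array shown above.
# In practice this would be loaded from wherever the raw model outputs are stored.
raw_response = """
[
  {"id": "ytc_UgyeNDvNzSLkFYzfxwl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM batch response and return the record for one comment ID."""
    records = json.loads(raw)
    return next((r for r in records if r.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgyeNDvNzSLkFYzfxwl4AaABAg")
if coding is not None:
    # Print the four coded dimensions, mirroring the Coding Result table.
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {coding[dimension]}")
```

Running the sketch prints the same dimension/value pairs shown in the Coding Result table for this comment (responsibility: developer, reasoning: consequentialist, policy: liability, emotion: indifference).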