Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- rdc_kvslbu1: That's a good question. Will an AI tell everyone "hey fellas I'm smarter than an…
- ytr_UgzcT4Lm2…: @ThePickelSurprise completely agree. I don't even like ai for the most part.…
- ytc_Ugxwxy9xH…: "I’m an artist too!" Says the idiot who uses AI to make "art" when even children…
- ytc_UgzB3FDpm…: I just asked ChatGPT which app has a ChatGPT that speaks and it said they can’t …
- ytc_Ugx0FlTLy…: Another great vid I really like the 3rd one where he didn’t have to touch the wh…
- ytc_UgyH1A_dH…: I have a video of Joe Biden doing a speech and he glitches on the screen and onl…
- ytc_Ugy-JEgQE…: AI will eventually make the 1% 10x richer than they are now. Life will be terrib…
- ytc_UgzUIKVkH…: The number of dumbass comments here is more depressing than anything they've sai…
Comment
Interesting, but this particular presentation by a world expert shows that the current global framework for regulating AI and avoiding runaway AGI implementation in the military realm is insufficient to prevent a global catastrophe. As we speak, the instant Israeli-Iran battle includes use of missiles using AI to independently determine, in real time, the highest value target to hit based on maximizing value to be destroyed, without limit. Having nations "talk to each other" is a weak, paltry means of preventing the first nation to achieve AGI from implementing it in an unprovoked first-strike attempt to achieve dominance, as demonstrated by the instant mideast battle.
youtube | AI Jobs | 2025-06-15T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
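The four dimensions above can be checked programmatically. A minimal sketch, assuming the category sets visible in this dump are the whole codebook (they may be incomplete), with a hypothetical `validate` helper that is not part of the tool's actual API:

```python
# Allowed values per dimension, inferred from the samples in this dump;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"government", "company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose coded value falls outside the codebook."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown above, as a record:
record = {"responsibility": "government", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(record))  # [] -> every dimension is within the codebook
```

A record with an out-of-codebook value (e.g. a hallucinated category) would come back as a non-empty list, which makes it easy to flag bad LLM outputs before they enter the dataset.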
Raw LLM Response
[
{"id":"ytc_UgwmesvH7qUKMcKq1p54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKjlzbMLd8VVxx9zx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwr38ObcJXhfDa9US94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy16TPQpxEjwJNDog94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdtlvJg2Lol6sxeVl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz9dqZla1BICt7KWw54AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz69kk2Fm-qhkej-vZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxHzSIhGqs4XUxLGtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKz-dXfR_A8RNzmtp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyT2jGPPP_5eC7pVAF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
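The "look up by comment ID" view could be backed by parsing a raw response like the one above and indexing it on the `id` field. A minimal sketch, using two records copied from the array above; the `index_by_id` function name is illustrative, not the tool's actual API:

```python
import json

# Two sample records from the raw LLM response shown above.
raw_response = """
[
 {"id":"ytc_Ugy16TPQpxEjwJNDog94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugz9dqZla1BICt7KWw54AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (JSON array) into {comment_id: coded_record}."""
    return {rec["id"]: rec for rec in json.loads(raw)}

coded = index_by_id(raw_response)
print(coded["ytc_Ugy16TPQpxEjwJNDog94AaABAg"]["emotion"])  # fear
```

A `json.JSONDecodeError` here would indicate the model returned malformed output for the batch, which is worth catching and logging rather than letting it crash the coding run.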