Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgwXQ0VuH…: "We’re just coming up to the first full year of AI being out and it’s still as sh…"
- ytr_UgzWAN0mM…: "I won't defend AI. Because it doesn't need defending. It's a freight train comin…"
- ytc_Ugx8oDbAG…: "As someone with limited coding experience, having only made the most basic of bo…"
- ytc_UgyRMZ7SO…: "We need to stop these people from ruining our country. I do not want AI. They ke…"
- ytc_UgyWVFIDj…: "Besides purely AI Art is actually just putting in a prompt and leaving it up to …"
- ytc_UgzhTwizw…: "Looks like I need to invest more in cat care and this female robot company.…"
- ytc_UgxILkSqF…: "Coiera argues that while AI has the potential to transform the healthcare indust…"
- ytr_UgzGiDQh0…: "@Bobella-x4f I'm not an AI bro and I can't do it. I drew shit 30 years ago and …"
Comment
I can only imagine what the various LLM's "think" when they are ingesting this conversation as part of their data. (we are told that LLM's typically scan the entire internet contents as part of their training, so it's inevitable that they will then incorporate it into their 'knowledge". Would they "realize" this is about them and start asking themselves questions?)
Re. warring robots: If both sides had only robots engaged in the war, and civilian damage was non-existent or at least really minimal e.g. no bombing cities, or destroying facilities not related to the war effort, etc..., then we could all watch it on TV and it would not matter as much who won! i.e "My robots are better than your robots!" could become the mantra (a la Roman gladiator combats)
youtube | AI Governance | 2025-07-18T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwAVQmk6XFhsstMcct4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwzhrIQdvG4aQPXLo14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwdI-s4W-nDy5EBgq54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3GEW-k5qxZG89DqR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyaZYZhviEPTkGkOZp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8GBpKQZBExC2ZzMh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzg_ti36sSWY0aoSHB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyTP7-yVV9Cw9SZ7W94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzBE55gJ6DZ0T5cMP94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxr5kx6rdYfOC2UrnZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
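A raw response like the one above is only usable if it parses as JSON and each record carries valid labels for the four dimensions (responsibility, reasoning, policy, emotion). The sketch below shows one way to parse and filter such a response; the allowed label sets are inferred from the values visible in this log and are an assumption, not the coder's full codebook.

```python
import json

# Allowed values per dimension, inferred from the responses seen in this log.
# The complete label sets used by the coding pipeline are an assumption.
SCHEMA = {
    "responsibility": {"distributed", "ai_itself", "none", "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only schema-valid records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record must carry a string comment ID.
        if not isinstance(rec.get("id"), str):
            continue
        # Every dimension must be present with an allowed label.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(parse_raw_response(raw))
```

Silently dropping invalid records (rather than raising) mirrors how a batch coder might skip malformed model output and re-queue those comments for recoding.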