Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ***** _"The purpose of an autonomous delivery vehicle would be to save the compa… (ytr_UgiPFgFua…)
- @ThaWuWe literally have no current path to an AGI you dolt. Machine learning t… (ytr_UgzJ8y-9m…)
- Whoever programs a robot or a toaster to feel pain, recognize it as such, and be… (ytc_UgxWLQNzz…)
- Ai doesnt have soul and feeling in art thats something only we have so they can … (ytc_UgxzRFN-y…)
- i dont get why dont realize using a prompt is asking the AI to do art, so you do… (ytc_Ugy9ilT2e…)
- Work, and income is a small part of the equation. Universal Basic Income, and ev… (ytc_UgwYPwjxY…)
- I work in the firearms industry on Ak’s I’d love to see them program a robot to … (rdc_kyhlp2q)
- I think AI can be very beneficial but there are so many dangers as well. AI not … (ytc_UgwHt0Dph…)
Comment
First he works for Google earning millions and now he is trying to save humanity? I have my problems with GH. There is a lot of speculation in the interview. Fact is that the architecture of AI nowadays is just more of the same and especially a lot more power. There are so many things AI is still not capable of doing. And overall there are so many things we even don't understand in humans like the brain, emotions, the complexity of the body, conscience etc. So yes AI is already better in things we are just not built for. But there is a lot AI can't do and perhaps it will never been capable of doing. So will it change a lot? Yes. But then we can update our social systems. Will it wipe out humans? Probably not.
youtube · AI Governance · 2025-06-16T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
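Coded values like these can be sanity-checked against the codebook before aggregation. A minimal sketch, assuming controlled vocabularies inferred from the values that appear in this section; the actual codebook may define additional categories:

```python
# Allowed values per dimension, inferred from the codings shown in this
# section. The real codebook may include more categories (an assumption).
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def validate(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is well-formed."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above:
coding = {"responsibility": "developer", "reasoning": "mixed",
          "policy": "none", "emotion": "mixed"}
print(validate(coding))  # []
```

Running this over every row of a raw response surfaces malformed or out-of-vocabulary codings before they skew dimension counts.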
Raw LLM Response
```json
[
{"id":"ytc_UgwpQCAEMj2YVVGrrFl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzUioWuEvE-yESjkCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxnkFNaAc9GH3kLSzd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgydWWjZQc4dPkrKDOx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxM02cEbTvSJs5DqXx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyIMS2qAaNjZTy2BfJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxLHhLLLibxBNjvf354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyJtNw4uGCD4YPxKLR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy9gCdS6baCuoXAs-Z4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz0ZrBacsaDYebPZ-54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
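The lookup-by-comment-ID view can be reproduced offline from a raw response like the one above. A minimal sketch, assuming the response is a JSON array with the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the abbreviated two-row payload here is illustrative, not the full batch:

```python
import json

# Raw LLM response: a JSON array of per-comment codings (abbreviated sample).
raw_response = '''
[
 {"id": "ytc_UgwpQCAEMj2YVVGrrFl4AaABAg", "responsibility": "government",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_Ugy9gCdS6baCuoXAs-Z4AaABAg", "responsibility": "developer",
  "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
'''

# Index codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return codings[comment_id]

row = lookup("ytc_Ugy9gCdS6baCuoXAs-Z4AaABAg")
print(row["responsibility"], row["emotion"])  # developer mixed
```

Building the ID index once makes spot-checking individual comments against their raw model output cheap, even for large batches.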