Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "There IS something new here. Warhol and Fairey appropriated knowingly, often wit…" (ytr_UgwdJNeE8…)
- "Intelligence is made up of many smaller distinct functions, many of which tradit…" (rdc_mzz78zi)
- "Ilya's secret sauce is ISRAEL. Giving that NETANYAHU guy control to AI is the c…" (ytc_Ugwr1R6Le…)
- "I believe a lot of the fake republican and democratic in fighting is a distracti…" (ytc_Ugw--3n2H…)
- "Scale is the issue here we don't have so much compute as of now to automate ever…" (ytc_Ugx8RIZz4…)
- "Gemini 2.0 flash: Okay, this is a fascinating question! Let's dive into the rea…" (rdc_m2es4cm)
- "This answer from ChatGPT))) Thank you for your comment and we're glad to hear th…" (ytr_UgxtdA6Y1…)
- "It hurts the commercialisation of art, and makes it more difficult for people lo…" (rdc_nhyxfxf)
Comment
Very interesting interview! Thank you, Steve! 👍
Regarding the topic of AI having emotions and the example with the two robots, I would frame the story from a different perspective. I believe it's easy to determine, based on facts, whether the chances of survival are lower when facing a bigger threat—of course, the feeling of "fear" kicks in, and the smaller robot retreats. But this is purely mathematics.
I think the real question here is (in my opinion): even if the odds are heavily against it, would the smaller robot still fight the bigger one for a greater cause, even if that means its own extinction?
We all know of humans throughout history who changed the course of events, even against the odds—those who had the courage to act. It’s definitely a topic worth discussing. Personally, I don't believe an AI could ever truly sacrifice itself for a greater good—all of its decisions will ultimately be based on mathematical calculations.
youtube
AI Governance
2025-06-17T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugwbdkr3ml7a0-zZcPx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz7gmyXtODls9kxa6R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGKEPXNgoHFnJtAwx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzt_2OxoDqm5l5Nsll4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyrnqu1F0WC_g8UGoN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJc9wuCumKLaL6F194AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxQfAaeUy9qBeJxhvx4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyWfKufPgBfM4km3Md4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwl-vdW1azCqkIYXP94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxQkltu9Du8C3vYBU54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
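A raw response like the one above can be checked before it is stored. The sketch below parses the JSON and keeps only records whose values fall inside a set of allowed labels. The label sets are an assumption inferred from the sample output and the dimensions in the Coding Result table; the actual codebook may define more values.

```python
import json

# Allowed label sets per coding dimension (inferred from the sample
# response above; the real codebook may include additional values).
ALLOWED = {
    "responsibility": {"none", "developer", "government", "user", "distributed", "ai_itself"},
    "reasoning": {"none", "consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"none", "approval", "outrage", "fear", "indifference"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and keep only well-formed records:
    a string comment ID plus one allowed value per dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec.get("id"), str):
            continue
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

sample = ('[{"id":"ytc_UgzJc9wuCumKLaL6F194AaABAg",'
          '"responsibility":"government","reasoning":"deontological",'
          '"policy":"regulate","emotion":"outrage"}]')
print(len(validate_codes(sample)))
```

Records with an unknown label are dropped rather than repaired, so a malformed model output surfaces as missing codes instead of silently corrupting the coded dataset.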