## Raw LLM Responses
Inspect the exact model output behind any coded comment. Look one up by its comment ID, or pick from the random samples below:
- "If AI makes it, then it should lose value. But how do we pay? With our lives lol…" (`ytc_UgygoYW0Z…`)
- "I've been betting on a sect turning AI into a deity for a while, and I'm certain…" (`rdc_mdkro9t`)
- "I am a robotics engineer and it is not possible for a robot to automatically do …" (`ytc_Ugx12MlLZ…`)
- "Fear mongering for views is upsetting, Ai is not going to break down your door a…" (`ytc_UgzCuGo2J…`)
- "One of the cool things about art is that you can't pin it down. If you try to ca…" (`ytc_UgxO2Frna…`)
- "It costs around $10 billion to build and train an AI model. Shady things can to …" (`ytc_Ugwj-x9JE…`)
- "I don't use to use AI for coding, but I thought about what could be a good use: …" (`ytc_UgzYrgCjh…`)
- "Ai does learn the same way a human does just using everything on the internet no…" (`ytc_UgxQCjUBN…`)
### Comment

> The simplest thing that did not get discussed is that - it is not as if we are talking about developing and then letting AIs do whatever they will do to see if that is dangerous. It may be a bad human actors may define bad goals for them - for example a pilot who intentionally overrode the autopilot and other safety controls of the plane and flew it into the side of the mountain. What if the Tesla or Zoosk removed the training for safety from self driving cars. This is the problem with approach like that taken by Yann Lecunn. He for example thinks that good guys will always be ahead of bad guys. ANd I think Eli is basically cautioning us to be conservative - sure make progress but cautiously at every turn and more strongly conservatively in relation to AI because of its nature.

youtube · AI Governance · 2024-11-12T00:5… · ♥ 2
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
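
For downstream processing it can help to give these rows a typed shape. Below is a minimal Python sketch of one coded record; the field names mirror the dimensions above, but the `Literal` value sets are assumptions inferred from the sample responses on this page, not an exhaustive codebook.

```python
# A minimal sketch of one coded record. The Literal value sets
# below are inferred from the sample output on this page and may
# be incomplete.
from dataclasses import dataclass
from typing import Literal

Responsibility = Literal["developer", "user", "ai_itself", "distributed", "none", "unclear"]
Reasoning = Literal["deontological", "consequentialist", "unclear"]
Policy = Literal["regulate", "ban", "liability", "none", "unclear"]
Emotion = Literal["fear", "approval", "indifference", "mixed", "unclear"]

@dataclass(frozen=True)
class CodedComment:
    id: str  # platform-prefixed comment ID, e.g. "ytc_..." or "rdc_..."
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```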
### Raw LLM Response
```json
[
{"id":"ytc_UgxwDnlEHA7QFwMzrZB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwGPNiP4G115HlCMmB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxgn2QDG4u3GwUCBPh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz431MRgmzceabjLdd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcbFmhgeHbLrPqRyN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx-xpntgp4QxxIED5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwePVVbMUGmOuwAgch4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyNv7S5t7BOv9eoxYZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwnNR89T2lV3e0tf7Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwIZrGwu4CUO899WoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
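
The ID lookup above amounts to parsing this array and filtering on `id`. A minimal sketch, assuming the raw response parses cleanly as the JSON shown (real model output may carry stray prose or code fences that need stripping first); the file name `raw_response.json` is a placeholder, not part of the tool:

```python
import json

def lookup(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding row for comment_id, or None if it is absent."""
    rows = json.loads(raw_response)  # the response is a JSON array of row dicts
    return next((row for row in rows if row.get("id") == comment_id), None)

# Example: recover the coding shown in the table above.
with open("raw_response.json") as f:  # placeholder path
    row = lookup(f.read(), "ytc_UgzcbFmhgeHbLrPqRyN4AaABAg")
print(row)  # {'id': ..., 'responsibility': 'user', 'reasoning': 'deontological', ...}
```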