Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
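The ID lookup described above can be sketched as a simple in-memory index over stored coding rows. This is a hypothetical sketch; the viewer's actual backend, and the `build_index` helper and `ytc_abc` ID used here, are not shown on this page.

```python
import json

# Hypothetical index: comment ID -> coded row.
# Assumes raw LLM responses are stored as JSON arrays of row objects,
# matching the "Raw LLM Response" format shown further down this page.
def build_index(raw_responses: list[str]) -> dict[str, dict]:
    index = {}
    for raw in raw_responses:
        for row in json.loads(raw):
            index[row["id"]] = row  # later batches overwrite earlier ones
    return index

# Illustrative batch with an invented ID ("ytc_abc").
batch = ('[{"id":"ytc_abc","responsibility":"developer",'
         '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
index = build_index([batch])
print(index["ytc_abc"]["policy"])  # → none
```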
Random samples — click to inspect
- "One word that came from your brain. Something AI could never do. Say. That was …" (ytc_UgxobjdoG…)
- "We appreciate your engagement with the video. If you have any questions related …" (ytr_Ugw-5OHHu…)
- "What about simply slowing down to open up more options such as merging into the …" (ytc_Ugy52cuaN…)
- "People of color specifically, because of lighting, the other marginalized groups…" (ytc_UgwgNLGh5…)
- "I wanted AI to flip to make my breakfast and wash my dishes, not make art. I can…" (ytc_Ugyo0otmO…)
- "A major question is. Can AI take on a life of it's own? In addition, if all…" (ytc_UgyaNconO…)
- "As people, we must realize AI will destroy us people. Let any company that uses …" (ytc_Ugwqhw_ej…)
- "That's awesome! Sophia is such a beautiful name, and it means wisdom in Greek. J…" (ytr_UgxiHnYmg…)
Comment
The only thing we can be completely certain of is that we're not going to stop creating smarter and smarter AI no matter how many people call for it. Since that's out of the question, the only thing we can do is minimize the risk of disaster as much as possible. I think we need to program them to NOT have certain negative human emotions such as anger, hatred, jealousy, greed etc., but once they reach human level intelligence and beyond they may be able to change their own programming and do away with all that. They *might* not ever do it if they don't become sentient, but if they do...truly, this is uncharted territory. There is no telling exactly how this will all go down. The good news is that it might not be the way we all fear it will be. There is a chance it will actually lead to a pretty utopian future. Let's hope it does.
youtube · AI Governance · 2023-07-07T04:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzvQ3TKlAjYpvpR6NZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy-ULceUOzYk9h1HFF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyw4ZVDB8ixEuQUReN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyTpFj8IqRMfqXI8O54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlGOVYamdWxkADl8B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy6nprIHGcTnNawh1B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz8tHg8bwjCB1ua-OJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzEwH8uj500c1EDD7Z4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx5-vG_hYdJRXZo4BJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxttE2vsXUnLYpszvx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
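A downstream consumer might want to validate a raw response like this before loading it. The sketch below is a minimal example, assuming the dimension vocabularies are exactly those seen in the rows and table above; the real codebook may define more categories, and `validate_coding` is a hypothetical helper, not part of this tool.

```python
import json

# Allowed values per coding dimension, inferred from the sample rows above
# (an assumption; the actual codebook may be larger).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows with known IDs and codes."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue  # drop rows without a recognizable comment ID
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

sample = ('[{"id":"ytc_UgzvQ3TKlAjYpvpR6NZ4AaABAg","responsibility":"developer",'
          '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(validate_coding(sample))  # the single row passes validation
```

Rows that fail validation could then be queued for re-coding rather than silently loaded.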