Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Sounds like this guys is talking about the future of the humanity. There is noth…" (ytc_Ugx966xqh…)
- "Thank you for your comment! If you're interested in AI discussions, feel free to…" (ytr_UgzA6Qjd4…)
- "I gave AI all the relevant information on the characters of my novel, then I tal…" (ytc_Ugx_Cruod…)
- "So when we were told that modern AI was just a Large Language Model, and that LL…" (ytc_Ugw0rfX4K…)
- "What's the difference between an AI USER and a zombie? Is there supposed to be o…" (ytr_UgwGBQTuW…)
- "It might level access to information. That doesn’t automatically level judgment.…" (ytr_UgxQCgdlg…)
- "Nah nah nah, ChatGPT needs to go. They need to be shut down. Altman needs to be …" (ytc_UgzNw4blb…)
- "Such a troubling conversation when you consider that the evil they predict we mi…" (ytc_UgyQfKB18…)
Comment
At the start, I thought Eliezer's argument made a lot of sense, and that Stephen was being needlessly skeptical. But, as the conversation went on, I came round to Stephen's point of view, about the horizon of possibilities and purposes, and how they may be benign and non-lethal. I think Eliezer was missing a crucial step in explaining why many, or most, random possible objectives that AI might target would be at odds with, or indifferent to, human survival. Or, using his own analogy, how he knows that the ships full of Europeans are so close to the shore already, such that it's becoming very difficult to avoid the risk.
youtube · AI Governance · 2024-11-21T01:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwtgMRUlx8PDQsv60p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzSY9J1xkmnPrSwoiF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPztZGi-69oOZrJ0F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyF6c5XmakE-LcmqJ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjPTEuuslTAXLHizl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxUROqrZeF3z7BSbD54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzl1sBoihSxvv3zUBJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy2lFzn4P184vvMQI54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxUElbE8rslrmeMpDp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQmPtgb-RgM6av4Hd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
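The raw response is a JSON array of per-comment codes, one object per comment ID, with four coding dimensions. A minimal parsing-and-validation sketch is below; the allowed value sets are inferred from the samples shown on this page (an assumption, not the project's full codebook), and invalid records are dropped rather than coerced.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# This is an assumption; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions}.

    Records with a missing ID or an out-of-vocabulary value on any
    dimension are silently skipped.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        dims = {k: rec.get(k) for k in ALLOWED}
        if cid and all(dims[k] in ALLOWED[k] for k in ALLOWED):
            coded[cid] = dims
    return coded

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"outrage"}]')
coded = parse_coding_response(raw)
```

Validating against a closed vocabulary like this is a cheap guard against the model inventing labels (e.g. `"anger"` instead of `"outrage"`), which would otherwise pollute downstream tallies.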