Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by picking one of the random samples below.
Random samples
- `ytc_UgzSR81ZO…`: "Always wondered how Amazon was so awesome when it came to taking my order…"
- `ytc_UgwM4I2tV…`: "Fun fact here in colombia colleges decided to develop systems that try to detect…"
- `ytc_Ugw5H3ENU…`: "They’re amazing, mine saved me from hitting a full grown buck that came complete…"
- `ytc_Ugza7iyda…`: "yeah at that kind of speed i dont think our AI could detect a wall in front of y…"
- `ytc_Ugw6mJ3tj…`: "Though I hate how toxic some people in the AI community can be, I'm still on the…"
- `ytc_UgxUF0NgM…`: "But where can we learn more about machine learning, like materials from Google, …"
- `ytc_Ugy1ncHsj…`: "I can't imagine believing \"art is subjective\" is a sentence that makes sense. \"…"
- `ytc_UgzTfAloo…`: "Should really stop calling the \"Artists\" since they're clearly not and will neve…"
Comment
He just thinks skynet terminator, because his concept of AGI is not specific. Skynet is not even a general intelligence. It had one purpose and that's top-down maintaining NORAD. We have general intelligence maintaining NORAD right now, called corrupt generals that want to nuke Russia and cause WWIII.
The quote on quote "artificial general intelligence" is very non specific, because it is general in what sense? Ignoring contextual information?
I think we all fail to classify AI and we massively overestimate or underestimate it capabilities. Elon probably thinks it can create a black hole the size of the milky way in an hour, given a supremely intelligent being with a small interface to the world, e.g. displaying text and receiving text. This is probably not the case. Omniscience, which is probably the upper bound for AGI capabilities, won't be any like omnipotence, which is probably what most people are scared of, since sentience without potency does not affect the state of the world.
youtube · AI Governance · 2023-04-22T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
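The dimension values in this table come from the same fields the model emits in its raw JSON response. A minimal sketch of validating one coded record before accepting it, using only the value sets observed in this output (the actual codebook may allow more values; `validate_record` is a hypothetical helper, not part of this tool):

```python
# Value sets observed in this tool's output; the real codebook may be larger.
OBSERVED_VALUES = {
    "responsibility": {"company", "developer", "none", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"mixed", "outrage", "indifference", "resignation", "fear", "approval"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty means it passes."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in OBSERVED_VALUES.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

record = {"id": "ytc_Ugxm90Cz9ic2BuQANbp4AaABAg", "responsibility": "none",
          "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
print(validate_record(record))  # []
```

A check like this catches the common failure mode where the model invents a label outside the coding scheme.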
Raw LLM Response
```json
[
{"id":"ytc_UgzqXrro1BuHgZclS5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyvc9qyXvpo4uuQoSd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxm90Cz9ic2BuQANbp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy1bTssf0RY0H1PJEd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxcsfOIvdygXEZXlxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyJrlVBFipR9PQPldV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx6J6rNXfpU8X0dJpB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwPAxazlT4uef736iF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwPhI6BLMLvjUyXZOF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyH_6XkcCtqaALxZgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
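A raw response like the one above can be parsed and keyed by comment ID so that the per-comment coding shown in the table is a direct dictionary lookup. A minimal sketch, assuming the model returns a well-formed JSON array with the fields shown (the helper name `index_by_comment_id` is hypothetical):

```python
import json

# A trimmed copy of the raw response format above, for illustration.
RAW_RESPONSE = """[
  {"id": "ytc_Ugxm90Cz9ic2BuQANbp4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx6J6rNXfpU8X0dJpB4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the model's JSON array and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(RAW_RESPONSE)
print(codings["ytc_Ugxm90Cz9ic2BuQANbp4AaABAg"]["emotion"])  # indifference
```

In practice the parse should be wrapped in a `try/except json.JSONDecodeError`, since a model can return a truncated or non-JSON reply.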