Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The Trouble is corporations own most everything their spending many billions every year on AGI. Their bottom line is trillions that their investments can make. They don’t want to think in terms of safety. Elon Musk has been saying for years to slow things down, It’s not happening. If we don’t get AGI first someone else will. So there is your answer, follow the money. AI may be built by us but it won’t ever be controlled by us. I don’t understand how anyone can think we can control something that is 100 times more powerful then are smartest human. If we have the smartest human, say for example, this person can speak ten languages and is a master in math. This same person knows next to nothing about medicine. Well, AGI, say at the present time knows twice as much as this person in math. AGI also knows just about every language plus every other subject, and can out perform most every human in two years. We’re not controlling it know and it keeps gaining more and more data every day. Nvidia is talking about more and more data storage etc etc more memory needs billions of dollars and so it goes. I agree with Roman, big money speaks and their not going to allow safety as their first objective. I don’t think most billionaires can possibly think in such terms. Yes AGI could help humanity in so many wonderful ways, but it just might be as distrustful of us humans, as we are about ourselves. It’s hard not to see the mess us humans have got ourselves in, but you must be blind if you see it.
Source: youtube · 2024-06-11T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzpLlNGFc3YJuNeRux4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwcmfqcgiBy3UKK_Dx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxOpm8Brpy_RzBvFLB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx7WK25ydv724vvlfF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugze3e9pdWk1-9ARvlB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxXEKfbMZq_SFkchH14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzjIvlLUvmrQQGjE6d4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxTA72GRTokAuYcYEl4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy0wINNYX1bofNiRsZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz5pCVmEXXuxeS96mh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
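Since the model returns one JSON array per batch, looking up the coding for a single comment amounts to parsing the array and indexing it by the `id` field. The sketch below shows this under the assumption that every entry carries exactly the four coded dimensions shown above; `index_by_comment_id` is a hypothetical helper name, not part of the original pipeline, and the example payload is a one-entry excerpt of the response above.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings,
# each carrying the comment ID plus the four coded dimensions.
raw_response = """[
  {"id": "ytc_UgxXEKfbMZq_SFkchH14AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and key each coding by its comment ID.

    Hypothetical helper for illustration: assumes the response is valid
    JSON and that each entry has a unique "id" field.
    """
    codings = json.loads(raw)
    return {entry["id"]: entry for entry in codings}

# Look up one coded comment by its ID.
index = index_by_comment_id(raw_response)
coding = index["ytc_UgxXEKfbMZq_SFkchH14AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints: company fear
```

In practice the raw string would come from wherever the pipeline stored the model output; a malformed or truncated response would raise `json.JSONDecodeError`, which is worth catching before indexing.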