Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "look, there is a simple solution here, the robot should not even attempt those k…" (ytc_Ugis4y0ec…)
- "You are an artist and you keep drawing we all will support artists more than AI…" (ytc_Ugw-jl2u-…)
- "I hear what you're saying but could I suggest that you not look at where AI is n…" (ytr_Ugw-xt2js…)
- "We noticed you like BINDERS FULL OF WOMEN. Can we interest you in these BINDERS …" (rdc_e7j1p3b)
- "Moving faster is debatable, when you can't trust a single line of code and have …" (ytc_Ugz51L4im…)
- "If you think about it, they can't exist without real artists. You can hardly cal…" (ytc_Ugzr-_V1v…)
- "An emotionless algorithm combined with congenital police stupidity and corruptio…" (ytc_UgxWtaYdM…)
- "Thank you 🙏 I wonder eventually will AI have an opinion on spirituality or how i…" (ytc_Ugz42ak3h…)
Comment
Bernie I love you, I voted for you in a state primary back in 2016, but on this one, I gotta disagree. I've got a tech background, I work with AIs all the time, and AIs today are nowhere near close to the AGI they would need to achieve to be on a par with human beings. AIs hallucinate, it's just from the way they're built, they use statistical analysis methods, and they even know and admit they hallucinate answers. Recently I had AIs tell me Charlie Kirk is alive, and Donald Trump is not the President today. And that's within the last week or so they told me that. I sent one links to news sites showing Kirk was in fact gone, the other I queried asking it, if Donald Trump is not president today, October 3, 2025, then who is the U.S. President today? It answered me "Donald Trump is President of the United States today on October 3, 2025." So even AIs know they screw up, and they do it all the time. If I had to rate AIs on a GPA, I'd give them a 2.0. In order to be able to replace humans in the work force? The hallucination problem needs to be fixed, their GPA needs to double to 4.0, and they have to approach HAL9000 of '2001: A Space Odyssey' levels of competency. And don't forget, even with AGI example HAL9000 in that movie, he suffered a psychotic break, murdered nearly the entire crew of the Discovery spaceship in that movie. So that's probably a problem we will see in AIs IF they do achieve Artificial General Intelligence (AGI) levels (that's a BIG if), if we get the hallucination problem fixed, that psychotic breaks and deadly behavior on the part of malfunctioning AGIs could be the next problem after the hallucination problem is solved. But don't fall for the tech hype Senator Sanders. The tech industry loves to hype this stuff, it's good right now, at getting C grade average answers from questions put to it. It's good at drawing pictures, it's good at creating fake videos. Replacing order takers at McDonald's working the drive thru windows? Not so good. 
Anyway Senator Sanders, keep up your concern for the welfare of the American workers, and stay concerned about AI, keep an eye on it, but not out of misguided fear it's so good it's about to put the working class out of work, but more for when it causes disasters because it was trusted too early to run dangerous systems and it hallucinated and caused a disaster.
Platform: youtube · Topic: AI Jobs · Posted: 2025-10-08T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugya7n4dulBcwceU6-Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzZ86DC-27tB21CZrp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyqLAyp2VoRpsTb7v54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyz2hgoeI7h38SJvyh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwso5QY9qGCmCDi_bh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2tFQ9nGtqXaCTCaV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_dbFdie16cjRtX1d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2JXCBejLn5ANfyXV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTvre2IiY6uin90Zd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyygeikxHHIXDzJBqZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
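The raw response is a JSON array of per-comment codes across four dimensions. A minimal sketch of parsing and validating such a response before ingesting it, assuming the category sets visible in the samples above (the real codebook may define additional values, and `ytc_X` below is a hypothetical comment ID):

```python
import json

# Allowed values per coding dimension, inferred from the sample output above;
# the full codebook may include more categories (assumption).
CODEBOOK = {
    "responsibility": {"none", "distributed", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"mixed", "fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_records(raw: str) -> dict:
    """Parse a raw LLM response and index records with valid codes by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid is None:
            continue  # skip malformed entries without an ID
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            coded[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

raw = '[{"id":"ytc_X","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
coded = validate_records(raw)
print(coded["ytc_X"]["policy"])  # regulate
```

Indexing by comment ID mirrors the lookup view above: once validated, any coded comment can be retrieved directly from its `ytc_…`/`ytr_…`/`rdc_…` identifier.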