Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Can we do it? Meaning, as a species, can humans ("we"), create super intelligen…" (ytc_UgzuxmR_s…)
- "The robot is wearing formal clothes and the presenter seems to going to a beach…" (ytc_Ugxfz52cC…)
- "As AI transforms industries and traditional jobs vanish, is it time to rethink w…" (ytc_Ugw6CSOAG…)
- "People who call themselves artist after ai printing a picture are the same dudes…" (ytc_Ugy6XUDVf…)
- "_None of them are brave enough to risk it._ Are you talking about "none of them…" (ytr_UgxhTP7yQ…)
- "To be honest I expected a gossipy book. Instead I got a well-reasoned and well-w…" (ytc_Ugz0KJnUc…)
- "They're pioneering AI that can operate on humans and replace family doctors, tha…" (ytc_UgzRrpi-7…)
- "Fun fact: one of China's domestic surveillance programs translates literally to …" (rdc_ic0u3gt)
Comment
Hi guys. I'm a software engineer. I work with AI ALOT. It will not take jobs anytime soon. Those that were replaced are being hired back. Our current AI is not smart or intelligent. All it's doing is regurgitating scraped data that has been trained. Then it uses Statistics to determine the best response.
For me, it saves time looking through docs or checking stack overflow.
Here is a recent example of something super simple I had it do. Create a travel itinerary. It creates it fine but mixed up the days. I corrected it and it made the problem worse. I eventually gave up and started a new chat.
I'm not seeing advances anymore in these models. Guys like Mr. Epic Beard don't know what they are talking about.
youtube · AI Governance · 2025-09-05T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyUa9QkEtNLwEbhHm94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7lhSWmDc0-D_96Rl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwtRuWzbEci4IlLm0F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxdHydX1heT_FqNVhh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZYTae9Nz4CbqDu8N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzgrfsCP9DRVga7Nfd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyY9hXEnnjszhg0Fr94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy2YmTVJcUQOcExWZh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwwRrs0A_HFBE_oURN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxizrJDGh890PjGlkN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
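
The raw response above is a JSON array in which each element carries a comment ID plus four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing such a response and indexing it by comment ID, so a coding can be looked up the way the panel above does — assuming the allowed value sets are exactly those observed in this sample (the real codebook may permit more values; `index_codings` and `ALLOWED` are illustrative names, not part of any actual tool):

```python
import json

# Value sets inferred from the sample response above; a real codebook
# may define additional categories for each dimension.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"approval", "resignation", "outrage", "fear",
                "indifference", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and key each coding by comment ID."""
    codings = {}
    for row in json.loads(raw):
        # Reject values outside the observed codebook rather than
        # silently storing a malformed coding.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        codings[row.pop("id")] = row
    return codings

# Look up one coding by its comment ID.
raw = ('[{"id":"ytc_UgyUa9QkEtNLwEbhHm94AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
by_id = index_codings(raw)
print(by_id["ytc_UgyUa9QkEtNLwEbhHm94AaABAg"]["emotion"])  # approval
```

Validating against a fixed value set at parse time catches a model that drifts off the codebook immediately, instead of letting an unexpected label surface later in the aggregate results.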