Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_UgxsRaTch…: @Jessica David Bro... I hate to tell you this but Call of Duty -has- is artifici…
- ytc_UgwNRqsp1…: Yea I noticed from the beginning LLMs speak exactly the way politicians do. Soul…
- ytr_Ugzljq871…: @ysgramornorris2452 ai has gave way to many neuroproteins and other things as w…
- ytc_Ugx3jFkXl…: Nobody is paying hundreds of thousands of pounds to have a robot that changes a …
- ytc_UgxKFmQQ3…: It's difficult to refrain from being concerned about watching Geoffrey Hinton(TH…
- ytc_Ugz7NKROT…: Shud yall just now realizing the Genii is out of the bottle. Ai will be the end …
- ytr_UgytEtleL…: Yep. This so-called Artificial "Intelligence" is just a cover for massive Intel…
- ytr_Ugy2NHzyb…: valid point, though i think that using ai for therapy is probably worse for your…
Comment
IT jobs will be all right, nothing much to worry about.
Think of patents, corporate espionage and data leaks, just to name three off the top of my head. All reasons why any company is wise to be extra careful about going "AI".
Remember, everything you ask the LLM will be used to further train it. Hence when you feed it your source code, what you feed in becomes part of its training data.
The things you stick into the LLM are absolutely not safe and secret. They can be retrieved. For now with considerable effort, yet still.
More concretely: the database(s) are a company's foundation. They hold nearly everything the company has built over the years. Feeding the DB into an LLM is practically sharing the company's core value.
We'd have to strongly assume that all communication from your browser client up to the LLM is ... well, what, encrypted and up to the highest standards of data privacy?
And even if it's not the database, imagine any closed-source software vendor sharing their core code with the LLMs. Not going to happen.
No need to share the core code with the LLM? How is the LLM to know how your coded representation layer is to communicate with the business-layer code if the latter isn't shared with it? And even if you did feed your representation layer to the LLM, we shouldn't be so naive as to assume the LLM cannot make educated guesses about the business layer from what it is given.
Besides, trusted handling of your data by a company located in the USA? Don't make me laugh!
Right, on to the jobs of lawyers.
Unless you specifically train an LLM on law, you will have grand errors in whatever the LLM gives you. I know first-hand, because I've been there.
And even if you did train an LLM on law, it would still be a bad idea, again for the same reasons described for IT jobs.
And lastly, there is no "AI" on the horizon. There are only LLMs. And they are not bad, but not good either.
youtube
AI Governance
2025-06-16T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzDiZU493yEun7ATSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzIAAizQJRZBPaJIth4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzekWLeHeiRbQwqyTN4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzjCy_k7-vjywMvCp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxatEKB_4tImoezFsp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyowGAVf4v7z_9d6cV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymaSC8979G1MjsnGB4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyIvnGaE9CrNLLshl94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwFQh-P2b2k3VIHIJF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyNCgv5_tk1CMZER8R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
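Each object in the raw response above maps one comment ID to the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed and a single comment's coding retrieved by ID, assuming only that the response is a well-formed JSON array; the function name is illustrative, not the tool's actual code, and only two entries are reproduced here:

```python
import json

# Truncated copy of the raw LLM response shown above (two of the ten entries).
RAW_RESPONSE = '''[
{"id":"ytc_UgzDiZU493yEun7ATSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzIAAizQJRZBPaJIth4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the coding objects by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)
coded = codings["ytc_UgzIAAizQJRZBPaJIth4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# company deontological industry_self indifference
```

Indexing by ID rather than scanning the list each time mirrors how a per-comment lookup would work once the batch response is parsed; a production version would also validate that each dimension's value falls within the coding scheme's allowed categories.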