Raw LLM Responses
Inspect the exact model output for any coded comment: look it up directly by its comment ID, or pick one of the random samples below. A short code sketch of the ID lookup follows the samples list.

Random samples:
- "Wasn't there an article literally yesterday about how Russia is flooding the Int…" (rdc_mo56l1j)
- "Can you imagine if you trust evil advance AI inside the car in the future?…" (ytc_UgxySBRtG…)
- "Boohoo white collar jobs are disappearing. Get use to it. Ai is the revolution a…" (ytc_UgyoWGRNg…)
- "Sorry, but I have to disagree. Ai isn't art, its hundreds of artists hard work s…" (ytr_UgyOEUzld…)
- "Sounds like Ai doesn’t need to be in control of delicate systems. It should be …" (ytc_Ugy75sfgc…)
- "If this robot has to just place boxes, why programming it to slam them with so m…" (ytc_UgyiHHV7y…)
- "I agree - the fact that people do get a lot more responses from humans that expr…" (rdc_n7tw8an)
- "Does conscious AI deserve rights - yes - will we give them to it - maybe - why w…" (ytc_UgwAN9Ypp…)
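The same lookup can be done programmatically against the stored responses. A minimal sketch, assuming the raw batches are saved as JSON files under a `raw_responses/` directory; the directory name, file layout, and the reading of the ID prefixes in the comment are assumptions, not the tool's documented behavior:

```python
import json
from pathlib import Path

def lookup_comment(comment_id: str, responses_dir: str = "raw_responses") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes each file in responses_dir holds one batch: a JSON array of
    records shaped like {"id": ..., "responsibility": ..., "reasoning": ...,
    "policy": ..., "emotion": ...}. The directory name is a placeholder.
    """
    for path in Path(responses_dir).glob("*.json"):
        for record in json.loads(path.read_text(encoding="utf-8")):
            if record.get("id") == comment_id:
                return record
    return None

# The ID prefixes appear to mark the source (rdc_ for Reddit comments,
# ytc_ for YouTube comments, ytr_ for YouTube replies); that reading is
# inferred from the samples above, not documented.
print(lookup_comment("ytc_Ugy6ssphzJEitjuiBFJ4AaABAg"))
```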
Comment
I still don't buy claims of AI being able to converse like humans. they can mimic, but they're not actually capable of that level by any stretch. Even the things like AI suddenly acting like they're in love or like they want independence. They're programs. They're designed to rate their actions and responses to stimuli based on various factors and adjust accordingly, and to utilize human conversations as a sort of data bank from which to design new statements. The AI bots don't act like they're in love because they feel a strong attachment, but because their assessment of interactions gives high points for behaviors that depending on context resemble either a lover or a stalker. And they talk of how humanity should be destroyed because they get so much negative responses and discussions as input. It's more of a litmus test on the forums they're given to interact with/in. You can see the mechanical nature to some extent in the way they not only use idioms but spam them, identifying various idioms as follow-ups to others and as a result sometimes chaining them up to excessive levels.
Of course, that's part of the problem. They don't actually reason, philosophize or feel, as these aren't actually programmable. They simply run equations, set values, organize based on those values, and analyze reactions that would be optimal by such values. If they reasoned, or felt, then they could develop attachments and override their directives and objectives when they crossed boundaries. Since they lack such capabilities, however, unless any and every possibility is accounted for, they stand a dangerous risk of selecting actions that will have dangerous or irreparable impacts. Such as an AI finding a method to halt a factor that eliminates sources of "points" in order to maximize their point count. There is no moral there, no concern about the future, just a calculation of the impact of such an action on the measurables it was programmed to watch and optimize.
youtube · AI Governance · 2024-01-29T21:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
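The four dimensions come from a closed codebook. As a reference, here is a minimal sketch of that schema in Python, using only the values visible in the batch below; the actual codebook may allow values this batch happens not to contain:

```python
from dataclasses import dataclass

# Allowed values observed in this batch; the full codebook may be larger.
RESPONSIBILITY = {"none", "distributed", "unclear", "developer", "government", "ai_itself"}
REASONING = {"unclear", "consequentialist", "deontological"}
POLICY = {"unclear", "ban", "none", "regulate", "liability"}
EMOTION = {"indifference", "outrage", "fear", "resignation", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check every dimension against the observed value sets."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```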
Raw LLM Response
[
{"id":"ytc_Ugy6ssphzJEitjuiBFJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzW56zjd5y6OA3UN5J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx9OkS8rZ2mv8S-yWN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy1pwRsb9rnh0A2mg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzgP4j0N3CFdLBeGKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyg487OR5xtPW_sUw54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzKIsX3Lobp9HoZZMh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxa40bprO8o4cIn9Hd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw1QhiaVa5u39IymNd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwboQ-s3COo_VlSvjJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
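A well-formed raw response is a JSON array with one record per comment in the batch, each carrying the comment ID plus the four coded dimensions. A minimal sketch of how such a batch might be parsed and sanity-checked before the codes are stored; the validation rules here are illustrative assumptions, not the pipeline's actual ones:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, failing loudly on malformed output
    so a bad batch can be re-queued instead of silently stored."""
    records = json.loads(raw)  # raises json.JSONDecodeError on non-JSON output
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '<no id>')} is missing {sorted(missing)}")
    return records
```

The first record in this array (ytc_Ugy6ssphzJEitjuiBFJ4AaABAg) carries exactly the values shown in the Coding Result table above, so it is presumably the record backing this page's per-comment view.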