Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Modern Technology is really Take Knowledgy or what they call Data collection. AI…
ytc_UgyZtyIVE…
The goal of AI and quantum computing is to enable eternal life for the ultra bil…
ytc_UgwBqlQHR…
Every big tech company does this cycle. Hire aggressively, announce "AI transfor…
rdc_oac3pd6
All laptop jobs will be replaced by ai soon enough, I say this as a software / c…
ytc_UgxMGLb20…
Honestly, I dont care if ai becomes maim stream, cause all it means is that arti…
ytc_UgxqFLPvN…
With all due respect, every time I hear the words 'AI' and 'safety' in the same …
ytc_UgzB-TKpb…
Streamers like Asmongold already exist solely for the audience to regurgitate th…
ytr_Ugwy6DPwu…
This thesis is for naive.
Actually, rather sooner than later, AI will be able to…
ytc_UgzAe3i6E…
Comment
I think what he says doesn't make sense, when he claims that AI is becoming more and more intelligent and surpasses humans in intelligence, that it has an IQ of 150, then 170, then 250, and then "takes off". What good is a car without a driver, if the pilot no longer uses the car? And if the man feels and the car doesn't, or even if the car "felt" better, it absolutely couldn't do without a driver. Where would the car "feel" from: from its own consciousness in relation to the man, or only from what the man feels, being itself incapable of feeling? So how can it become more intelligent than you, since it depends on you, and especially since it doesn't feel, it doesn't have affinities, and it "can't make mistakes" because it can't recognize them? The whole thing seems like madness that doesn't make sense.
Only based on your observations does the AI discover its mistake, and only in a closed system like chess, where it can "see" itself. In an open system, where imagination and desires drive existence, what does it do without humans?
youtube
AI Governance
2025-12-04T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzeR1W5VDn--za2c8x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxl9wpvvs9iFpVseb14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz99OwhUI3RKR4bi5h4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzmO8O1N2RjBbwxVdl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxvVSn9B4V0m92rK6B4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyINU-raCjuO7T0GgF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyxYnT6cvUSYcGGPM94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoIzPw8kdAvShfoW94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyFZF6HXxSTfYp_-il4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxYGlJSn9cEXEitIXd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
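The raw response is a JSON array of per-comment codes. A minimal sketch of how such a payload could be parsed, validated, and indexed for the comment-ID lookup described above (the allowed-value sets here are assumptions inferred from the samples shown, not a documented schema; the single-record `raw` string is an illustrative excerpt):

```python
import json

# Assumed value sets per dimension, inferred from the samples above.
ALLOWED = {
    "responsibility": {"developer", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "resignation", "approval", "indifference", "fear"},
}

# Excerpt of a raw LLM response (one record, taken from the array above).
raw = """[
  {"id": "ytc_UgzeR1W5VDn--za2c8x4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index by comment ID so a coded comment can be looked up directly.
by_id = {r["id"]: r for r in records}

# Flag any dimension value outside the assumed coding scheme.
for rec in records:
    for dim, allowed in ALLOWED.items():
        if rec.get(dim) not in allowed:
            print(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")

print(by_id["ytc_UgzeR1W5VDn--za2c8x4AaABAg"]["emotion"])  # -> outrage
```

Indexing by ID rather than scanning the list keeps the lookup O(1), which matters if every "Look up by comment ID" request re-reads a stored response.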