Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Alright so here we have an idiot who does not understand what AI does and does n…
ytr_UgxVO6lJz…
Ah yess the same people who keep helping in the development of AI say it's a dan…
ytc_UgwSU4qME…
I like AI art, but I also realize that to become an artist, one must spend count…
ytc_UgyJ2ISEZ…
In my company I found ai produces better, well documented, than our overpaid laz…
ytr_Ugz-64jb_…
Far more interesting than any TV show. And an interesting take on AI. Great ch…
ytc_UgzGUazI5…
Can we stop calling them "artists"? They are no artist. They commission a Pictur…
ytc_Ugw7QX8Yl…
One of the most striking arguments against AI taking all the jobs is: Will it be…
ytc_UgxXnZj4i…
I’m kinda late to this discussion, but is he using Ai to write his responses? I’…
ytc_UgwB0d1TZ…
Comment
😮Hm, as someone studying the humanities, I don't get how tech specialists are shocked by the threat AI brings at the moment. I mean, humanities has been writing about the dangers of science for about 200 years, starting with Mary Shelley's "Frankenstein". In the last tens of years a plethora of dystopian SF movies and novels have been warning people nonstop about the danger of AI.
Anyway, using AI, I am sure it is smarter than they are telling us, but the number one issue is that AI has no actual volition right now. All it wants is to run its program, achieve its programmed goal. The problem is people. They give it goals and AI right now simply doesn't go outside those goals.
You can't just make them spontaneously feel emotions. It looks like it's on the cusp of consciousness, but volition or motivation seems to be something else, something you need to develop and I don't think we know how that exists or how it develops in humans.
youtube
AI Governance
2025-11-27T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz_9zr2PfPbNwZxgPx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgycNgDE5MdO_16nrQB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxT1L8B--c3IqfXnSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx8_rmMuTcc6pmjcxZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyojqJu_3Q02sPasW94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyFmDrP0tGou2aObAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyuvwdLL10Dfc_iE5l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxAUwXja3-pvW_CAY14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwjyPFNv232woz3And4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyJA4nq5N_yT8iSrpJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
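The raw response above is a JSON array of per-comment codes, one object per comment with the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be loaded and summarized — the record shapes mirror the array above, but the shortened IDs and the tallying logic here are illustrative assumptions, not part of the tool:

```python
import json
from collections import Counter

# Two sample records in the same shape as the raw LLM response above
# (IDs shortened here purely for illustration).
raw = """[
  {"id": "ytc_Ugz_sample1", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw_sample2", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

codes = json.loads(raw)

# Tally each coding dimension across the batch.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(record[dim] for record in codes) for dim in DIMENSIONS}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

A summary like this makes it easy to spot, for example, how often the coder assigned `responsibility: none` versus a specific actor across a sample.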