Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI tools are helpful, but Axalem helps me maintain sharp focus when solving intr…" (ytc_UgzhQQseo…)
- "Sounds a bit like all those linear progress models from the 70s. But they failed…" (ytc_UgyZ2bPbI…)
- "AI isn’t programmed but trained. And how do you train a neural network to care a…" (ytc_UgwBuWyXb…)
- "Generation z is so lazy they have to middleman picking up food but somehow think…" (ytc_UgwfDvnIV…)
- "His videos are so old I bet AI has been scraping them for a while…" (ytr_Ugxx0sdiD…)
- "The flip side is would a manned vehicle have gotten that close to begin with. Wi…" (rdc_ecyvj9p)
- "Can you imagine the program set of an AI built by the CCP? I don wanna. But I …" (ytc_UgxgFdxaC…)
- "This thing can't do any of those things yet and it's already more useful than yo…" (ytr_UgzYbV8C0…)
Comment
Before February 2025 maybe I had a bs idea of 'AI danger'; a superintelligent AI like AM or a robot 'smarter than humans': but those things are just not possible. Really, humans have to define 'intelligence' and 'creativity'; even if they do what does 'more intelligent than humans' even mean? Personally I can't see how humans can build something 'smarter' than them; it doesn't make sense. I do not think there can be a well established definition of intelligence - let alone consciousness. Aren't humans mega intelligent already with what they have achieved? Aren't humans excellent at combining pieces of information together to form new things?; aka creativity? Those super-robots are impossible to exist.
_[written on 11th June 2025 12:52am Wednesday]_
Platform: youtube
Topic: AI Governance
Posted: 2025-06-10T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
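
The coding result collapses each comment into four dimensions: responsibility, reasoning, policy, and emotion. A minimal sketch of that record type in Python, with the value sets assumed only from the labels visible on this page (the full codebook may define more categories):

```python
from typing import Literal, TypedDict

# Assumed value sets, inferred from the labels visible in this sample;
# the project's actual codebook may include additional categories.
Responsibility = Literal["none", "developer", "distributed"]
Reasoning = Literal["unclear", "consequentialist", "deontological", "virtue"]
Policy = Literal["none", "regulate", "liability", "ban"]
Emotion = Literal[
    "indifference", "mixed", "fear", "approval", "outrage", "resignation"
]

class CodedComment(TypedDict):
    id: str  # platform-prefixed comment ID, e.g. "ytc_…"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```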
Raw LLM Response
[
{"id":"ytc_UgxFLVUvQbV7r2ZYbqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzlZRdK2bRNgMjRmKB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgpnA3xfAaWj_9cb14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwIuMk5aRe0tJQ9dS14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyx_f4_R5NYLk1hsCF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEsG5622zFTRs22o14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzFzZd7YS5dEHNfycF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxr7QVD_p3_1Z4ui6B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxPR4uDXxHVpN6MJsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx_bkLDollpcbh2dMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
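
Each raw response is a JSON array with one object per comment in the submitted batch, so tying a comment back to its coding is a matter of indexing the array by `id`. A minimal sketch, assuming the output parses cleanly as JSON (a production pipeline would also need to handle fenced or malformed model output); `raw_response_text` is a hypothetical variable holding the array shown above:

```python
import json

def index_raw_response(raw_text: str) -> dict[str, dict]:
    """Parse one raw batch response and index its codings by comment ID."""
    records = json.loads(raw_text)  # the raw output is expected to be a JSON array
    return {rec["id"]: rec for rec in records}

# raw_response_text: assumed variable containing the JSON array above.
codings = index_raw_response(raw_response_text)
coding = codings.get("ytc_UgxPR4uDXxHVpN6MJsF4AaABAg")
if coding is not None:
    print(coding["policy"], coding["emotion"])  # "none resignation" for this comment
```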