Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
Random samples:

- “AI may have accelerated the devaluation, but a lot of degrees were already worth…” (ytc_Ugztp1vRC…)
- “To risky to make ai robots if they turn we fucked no chance I would let one near…” (ytc_UgwHE08jv…)
- “If I had a nickel any time someone on the AI art debate made a strawman i’d be r…” (ytc_UgxtNyg-X…)
- “I don't buy the 'it's stealing" arguement either. AI learns much as Humans do, …” (ytr_Ugymj3n78…)
- “Other than the fact he’s a raging leftist-what makes him think Musks a bad guy? …” (ytc_Ugw52KzCb…)
- “im still in school i want to become an animator and a mangaka i spend most of my…” (ytc_UgxNavqiz…)
- “The dumbest part about people calling AI art “theirs” (excluding the whole conce…” (ytc_Ugwd64qVl…)
- “IF A.G.I is achieved then "super-intelligence" would be an inevitable next step …” (ytc_Ugyv8wUqd…)
Comment
AI has been developing for decades by many people, eventually with two different schools of thought. As a scientist he tried to figure out how the human brain works, by trying to build, emulate its mechanism with AI. He looked at how closely AI would behave to the human brain, as a measurment of its advancement. In 2023 he realized that actually digital intelligence has certain huge advantages over biological intelligence. Also, AI advancement has only skyrocketed in recent years, and before that there also wasn't an AI race going on. So all things considered, previously he quite reasonably thought that there would be plenty time, like at least 50 years, to mitigate the risks of artificial general super intellgence.
youtube · AI Governance · 2025-06-27T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzyaRSCARr0fIUe9Wp4AaABAg.AJsQt6EEzWmAJsVj4ePgQ_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyFnLjzoawFjfUW5Kl4AaABAg.AJsPj7Dbh_JAJsSufnTfSo","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgycsztA3xJA7F5fo9l4AaABAg.AJsPPRpq43qAJsYqpBFMUi","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgzbtBkRfhwTBvHnQz14AaABAg.AJsHNffFZ3EAJsaQAZ7Fxq","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzbtBkRfhwTBvHnQz14AaABAg.AJsHNffFZ3EAJsha3GhfaM","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugykr5XJx05g1E09uSt4AaABAg.AJqu4wJOKBkAJqw1dy2pmJ","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwbFbGrm5bYedF8hG14AaABAg.AJqcxW73zGaAJqv0GN_14V","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugy0BtLvbfuIP4yw_zZ4AaABAg.AJqckXlbHgdAJqvVBfOvRW","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgwBBxXQm1po47ss8Mh4AaABAg.AJqVrupLHiQAJsLKv0pSHQ","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwBBxXQm1po47ss8Mh4AaABAg.AJqVrupLHiQAJsMFStYc51","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"}
]
```
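Since the raw LLM response is a JSON array of per-comment codes, looking up the codes for a specific comment is a matter of parsing the array and indexing it by the `id` field. The sketch below shows this, assuming only the schema visible in the response above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the comment IDs used here are shortened placeholders, not real ones.

```python
import json

# Hypothetical raw LLM response, shaped like the one shown above:
# a JSON array of objects, one per coded comment.
RAW_RESPONSE = """
[
  {"id": "ytr_example1", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and index the coded rows by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytr_example2"]["emotion"])  # -> outrage
```

In a real pipeline the same index would also be the place to detect IDs the model dropped or duplicated, by comparing `codes.keys()` against the batch of comment IDs that was sent.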