Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by comment ID.
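As a rough sketch of how such a lookup might work, the snippet below assumes the coded records are stored as a JSON array shaped like the raw response at the bottom of this page; the file name `coded_comments.json` and the helper `lookup_by_id` are illustrative, not part of the actual tool.

```python
import json

def lookup_by_id(path: str, comment_id: str) -> dict | None:
    """Return the coded record whose "id" matches, or None if absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array like the raw response below
    return next((r for r in records if r.get("id") == comment_id), None)

# Example: fetch the coding for one of the comments in the batch below.
print(lookup_by_id("coded_comments.json", "ytc_Ugy8SULrkjW6bu0TdkR4AaABAg"))
```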
Random samples:
- “Another commenter made a valid point. The CIA,& Another Govt entity has a stake …” (ytr_UgzSZsa9t…)
- “AI is not life, I don’t care how many mentally ill tech goobers push this crap. …” (ytc_UgzoIeVJZ…)
- “I don't get it why is he lying on this? that's not even true btw. Ask deepseek w…” (ytc_Ugw425wlm…)
- “There are two laws of intelligence: 1. Intelligence, whether human or artificial…” (ytc_UgwONVhRh…)
- “Ai is accelerating an already rapidly changing world. If it didn't affect people…” (ytc_Ugw7scxML…)
- “AZ, learn how to spell first, because somehow i would rather work at Amazon’s Wa…” (ytr_Ugw00Ks1W…)
- “AI will not murder us. It will honour and love us, give us great lives, but we …” (ytc_UgxN6hKiS…)
- “But what if the robot is already too smart and know how to play dumb by now and …” (ytr_Ugjq1LHcz…)
Comment

> I’m a computer scientist. I understand how these LLM work. I am very interested in the field and follow it closely both in the product and academic space. I’m very concerned what AGI/ASI will do economically, politically and existentially for human kind.
>
> But god is it painful to listen to Eliezer talk about this. Not because he’s wrong, I think his concerns are very valid, but he’s such a bad communicator. He makes it sound like he’s batshit crazy. He does not “meet” his audience where it is likely to find itself.

youtube · AI Governance · 2025-10-15T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
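The table above is a single record from the batch response below, rendered per comment. Here is a minimal sketch of that rendering step, assuming the record shape shown in the raw response; `render_coding_table` is a hypothetical helper, and the "Coded at" timestamp is assumed to be stamped by the pipeline rather than returned by the model.

```python
def render_coding_table(record: dict, coded_at: str) -> str:
    """Render one coded record as the Dimension/Value table shown above."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),  # assumed pipeline-side timestamp
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

print(render_coding_table(
    {"responsibility": "distributed", "reasoning": "consequentialist",
     "policy": "regulate", "emotion": "fear"},
    "2026-04-26T23:09:12.988011",
))
```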
Raw LLM Response
```json
[
{"id":"ytc_UgxaD2umPmOzA1LG5td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxKngtaKnq6I9KX2ht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzWWUxbn57DTangeKV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyLAsUWxo-NgOgj6It4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy8SULrkjW6bu0TdkR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyPBrin1P8uN0sKIs14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwQag7JWbslwo1Ewdp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz4J4GDO4ey3uKlwoh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVkTPe3ipP2mY6UOB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugywf6bvJ52Tch_vb014AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
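Because the model returns free-form text that must parse as a JSON array, a light validation pass helps catch category drift. The sketch below checks each record against the value sets observed in this sample; these sets are taken from the output above and may not cover the full codebook, and the file path is hypothetical.

```python
import json

# Category values observed in the sample response above; the actual
# codebook may define more, so treat these sets as illustrative.
OBSERVED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def validate(records: list[dict]) -> list[str]:
    """Describe every out-of-vocabulary value found in the batch."""
    problems = []
    for rec in records:
        for dim, allowed in OBSERVED.items():
            if rec.get(dim) not in allowed:
                problems.append(f"{rec.get('id', '<no id>')}: {dim}={rec.get(dim)!r}")
    return problems

with open("raw_llm_response.json", encoding="utf-8") as f:  # hypothetical path
    print(validate(json.load(f)) or "all records OK")
```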