Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "If it were a pregnant white woman who was falsely arrested then white people wou…" (ytc_UgxNBDqfz…)
- "AI absolutely could be used to effectively replace executives and management. It…" (ytc_UgzzJh15h…)
- "All of the thoughts, feelings, the person's soul is shown through so many ways w…" (ytc_UgxovPJGl…)
- "I do think that there is a real possibility that AGI and super-intelligence can …" (ytc_UgzaJfH6T…)
- "AI can turn a spec into a fully coded implementation very quickly. Glad I'm reti…" (ytr_UgwmeFbtd…)
- "Between ai and augmentation you end up with state control that can tamp down any…" (ytc_Ugw84K6Hb…)
- "My prof uses Winston AI and im scared that all humanizer that Inuse will detect …" (ytc_Ugx7gjerl…)
- "Guys I don't think you know what the /S means… It means he is being sarcastic,…" (rdc_d2yymqe)
Comment
I would say that I view AI from a technocratic, futuristic, expansive lens. End result, I respect Tegmark's views a lot but value the impossibly wide array of unknown future benefits for humanity even more. And given the hundreds of serious challenges humanity has to face up to, including humanity itself, all the more reason to urgently forge a path forwards with AI.
I am concerned that AI could be used to control and manipulate people in the near term. And whether open source or closed is better here, I'm not sure.
And exactly how to prevent a single person from creating bioweapons etc. is also a question mark. Perhaps practically monitoring and controlling resources required for this, using AI, would be one step.
Source: youtube · Topic: AI Governance · Posted: 2023-07-04T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwTjak25URYjjUSaHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwYZxy_MuFCWovZcHp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyPZK7sBFowZ7W59KZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzWoKTW1XVC3hRck-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzV3WZVP4EPhBf9YO94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMS6qzDPPh9v7z9V14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw9QSOyAiqYzgFI_EB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyi_gkRyn5QFEohOMt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhgbL9ssnNPSoPVXN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw_bJsdj1oBgFlOo194AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}]
```
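The raw response is a JSON array of per-comment codings keyed by `id`, while the table above looks a single comment up by its ID. A minimal sketch of that look-up, assuming the batch is parsed with the standard `json` module; the dimension names come from the response above, but the index shape and the fall-back to `unclear` for IDs absent from the batch are assumptions, not the tool's actual implementation:

```python
import json

# Two rows excerpted from the raw LLM response above.
raw = '''[
{"id":"ytc_UgwTjak25URYjjUSaHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzhgbL9ssnNPSoPVXN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

DIMS = ("responsibility", "reasoning", "policy", "emotion")

# Index the batch by comment ID for O(1) look-up.
rows = json.loads(raw)
by_id = {row["id"]: row for row in rows}

def coding_for(comment_id: str) -> dict:
    """Return the four coded dimensions for a comment; assumed behavior:
    every dimension defaults to "unclear" when the ID is not in the batch."""
    row = by_id.get(comment_id) or {}
    return {d: row.get(d, "unclear") for d in DIMS}

result = coding_for("ytc_UgwTjak25URYjjUSaHZ4AaABAg")
```

Under this assumed fall-back, a comment whose ID never appears in the parsed batch would come back with all four dimensions set to `unclear`, the same pattern shown in the Coding Result table.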