Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> The belief that we as a society, would relinquish the power of an AI or worse, AGI, to be exclusively used by a Worldwide Government, is absolutely ludacris. Why is Geoffrey blaming companies? Because companies and their utility of AGI would exceed that of the Government capability and would be able to, along with society in general, keep the Governments use of it in check. AI is the future "weapon" society will need to protect itself from bad actors that include Governments. Guns will basically be effectively rendered useless against this technology in terms of protecting yourself, your property, and your inaliable rights. If you lose, in effect, the 2nd Amendment by losing access to AI, society will be left defenseless. Geoffrey has substantial benefit to gain from his vision, holding a prominent political and technological role in a world-wide government as its chief AGI architect. I cannot imagine a worse situation. Giving an all powerful government, with absolute global control, absolute power. Geoffrey vision is worse than the modern day Oppenheimer.

*(Comment text reproduced verbatim, including original spelling.)*
Source: youtube · Category: AI Governance · Posted: 2025-06-16T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
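A coded record can be sanity-checked against the label sets for each dimension. The sets below are a minimal sketch inferred only from the values visible on this page; the full codebook may allow additional labels, and the helper name `validate_coding` is ours:

```python
# Label sets inferred from the codings shown on this page (an assumption,
# not the authoritative codebook).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation"},
}

def validate_coding(record: dict) -> list[str]:
    """Return the names of dimensions whose value is outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result from the table above passes cleanly.
coding = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "none", "emotion": "approval"}
print(validate_coding(coding))  # → []
```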
Raw LLM Response
```json
[
{"id":"ytc_Ugzbfr58Vi_2kWOEvSN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSBX2ZWxLgE1SfIAh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxnjEJigcgIkcpmEmp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyb4zVfTw9Z7ez4EIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxHDJWYmnovNazeDh94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxt41MZMXzszpssEPd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxXA3t6K4KFYSdhdbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4oAsKkKsfWeFlhpJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwT5Sv5doQu2QZnDe14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyHjNkMj28YtJmmqLF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
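Since the raw response is a JSON array keyed by comment `id`, inspecting the coding for a particular comment amounts to parsing the array and indexing it. A minimal sketch (the helper name `index_by_id` is ours; the two rows are copied verbatim from the response above):

```python
import json

# Two rows copied from the raw LLM response shown above.
RAW_RESPONSE = '''[
{"id":"ytc_Ugzbfr58Vi_2kWOEvSN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz4oAsKkKsfWeFlhpJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index each coding by its comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(RAW_RESPONSE)
print(codings["ytc_Ugzbfr58Vi_2kWOEvSN4AaABAg"]["emotion"])  # → approval
```

Indexing by `id` rather than scanning the list each time keeps lookups O(1), which matters once batches contain many comments.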