Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Not wishing to be unkind to Geoffrey but he is far too intelligent not to have considered and realised the likely consequence of his younger driven work in creating A.I. back then, and it's obvious Super Intelligence that would result ! To now be advocating for fallible humans and Governments to be sensible and create effective controls for the monster HE has created is beyond naive. He not only opened Padora's Box, he created the Box. Why did he not develop failsafe controls in parallel alongside the A.I. in its infancy ? His advocating for others to create controls now in the latter part of his life won't give the redemption he seeks because I think he knows the Human race is now doomed as a result of his work and probably sooner than later. No matter how much you try to train, trust or control an apex predator, it will always default to being an apex predator given the opportunity. Oh and the irony of this indepth interview being interrupted by a self promoting advert for advanced tech' A.I. type services is beyond tragic !!
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-07-12T18:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyRo_5YgKx35R_jH_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzII9VmMAIgrRb3Pvh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxS2vPZvRwu_rTGXcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzn5zzxXUeO--BHc194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8IEhcP0xs00-3-WV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiWYfhnrjSVCi6Pm94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcHLoOBg_YE0TrYBF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzVGCS8fjgc3ZfD05x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz5r_XEf48CtZ0SwCV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzMTmZc3RghCo887HF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
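The lookup-by-comment-ID flow described above can be sketched as: parse the raw LLM response (a JSON array with one record per coded comment, as shown) and build a dictionary keyed by the `id` field. This is a minimal sketch, not the tool's actual implementation; the helper name `index_by_comment_id` is an assumption.

```python
import json

def index_by_comment_id(raw_response: str) -> dict:
    """Parse a raw coding response and index its records by comment ID.

    Assumes the response is a JSON array of objects, each carrying an
    "id" plus the coded dimensions (responsibility, reasoning, policy,
    emotion), matching the format shown above.
    """
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Example input mirroring the last record of the response above.
raw = json.dumps([
    {"id": "ytc_UgzMTmZc3RghCo887HF4AaABAg",
     "responsibility": "developer", "reasoning": "consequentialist",
     "policy": "regulate", "emotion": "outrage"},
])

coded = index_by_comment_id(raw)
print(coded["ytc_UgzMTmZc3RghCo887HF4AaABAg"]["policy"])  # prints "regulate"
```

Keying by ID makes each inspection an O(1) lookup rather than a scan of the batch, which matters when a coding run returns many records per response.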