Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
So the Terminator movies werent make believe. They were in fact documentaries? 🤔…
ytc_UgyXnbGU9…
I think the argument for that is if they don't put in safety nets a super intell…
ytr_UgzHIV5xe…
how about the service stations that have meals showers and ither services for th…
ytc_UgwVa1hpw…
Pretty much any of the skilled trades are automation/AI proof. This includes: pl…
ytc_Ugz1d5ZTd…
I've used an AI chatbot literally for months on a daily basis and it's a fun con…
ytc_Ugz2jhnmn…
I’m a heavy VR user. One of the issues with VR catching on is the dearth of qual…
rdc_jhea6vn
no, it is different, when others artist copies art style they put credits, not l…
ytc_UgyfpB-Ij…
The biggest question? Would AI be wrong in sending us back to the stone age? I d…
ytc_UgxP5F09k…
Comment
When an AI can design the next generation of AI, that’s the “brown trousers” moment.
We are never going to be able to put the genie back in the bottle, there will always be researchers who consider “Can I achieve this?” When they should be considering “Is it safe to do this?” Like the sorcerer’s apprentice, do they have a way to stop the spell if things should escalate out of hand?
As I said, we’re not going to be able to stop this, but we may be able to stick a leash on this “Dog” So that if and when an entity does get to big for it’s britches and throws a major wobbler (and like any infant, it will happen), we then have a method to real it in, slap it’s arse and hopefully negate or at least reduce the damage caused.
The likes of Asimov spent a lifetime thinking about AI, it’s control (the 3 laws) and some of the possible outcomes arising from those rules, perhaps it’s time to heed some of those lessons.
youtube
AI Governance
2023-07-08T08:0…
♥ 19
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyz-qhKAgMsrCjKpGJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmABykHotX3Z2dH5t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzQcfnB1kGxw-W-D_p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypERxbNRt1wu1U_xt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxO_lyab2YvKvJeB_B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzDGh2PZcEtax1yM614AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzDwAM4-SoOR-W6Veh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz1DYWBFBAiRhzrXNl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8wc5R_6h1A3mDFD54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxcReb-5b8HoP_J0PF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]