Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
here is the most likely bad scenario:
1. mega cap builds new LLM that solved large parts of the hallucination problem, perhaps uses even a different algo, even if it's just a bit better than GPT-4 there is a big risk because:
2. they put that model in a server farm similarly to what ARC team at OpenAI did and give it a task to gain power, replicate etc (just like ARC did)
3. the model passes that test, means no harm, either because it was not perfectly tested or learned to manipulate and fake
4. the model gets released either by API (GPT-4 did get released to the public after that test), or if too powerful, gets released to groups of researchers
5. those people figure out a smart prompt engineering and very sophisticated way to do what the publisher wasn't able to do in 2.
6. the model gets used for automated hacking into government organizations, not even because it was told so but because this sort of penetration test wasn't perfectly supervised
7. the hack, because it is automated, runs at extreme speed and spreads to multiple governments, or: any malicious program that spreads to millions of users (remember this runs at high speeds, no human intervention)
8. you have a huge mess of the country where this leaked from being in a international conflict, this could not just spark political conflict but also fears of e.g. China that AI becomes too powerful (perhaps thats one reason they want taiwan), and them responding "accordingly" with military ultimatums since they would soon lose the cyber war from their view

even if that model does no harm, it could have capabilities to do harm, it's hard to prove it, GPT-4 can be used for automated hacking if enough engineering effort is made, but it would probably be a little too weak to be efficient

second scenario, science scenario:
1. mega cap builds LLM farm that uses agents to find stronger AI architectures through genetic algorithm (tries out stuff, mutates those that work), whole pipeline is automated from building the architecture to deploying and benchmarking to mutating it
2. goes on indefinitely until architecture found outperforms e.g. transformer (remember transformer is by no means a complex architecture)
3. since we learned that scaling up pretty much anything processing language has huge benefits, they scale that architecture up until performance falls off
4. rinse and repeat, architectures become better and better (btw SOTA chips are already designed by AI today)
5. they do the ARC/safety test as described in first scenario, give it malicious prompts and test it
6. model succeeds at malicious task

note that in this case they don't even need to release it to the public

it becomes existential when the world becomes aware that AI is a monstrous threat to their cyber safety, especially since China plans to be the leader of the new world order, we have seen in Ukraine how little it takes for someone to feel threatened and start a stupid war. the AI doesn't have to go terminator and take over for that, that would require immense intelligence and reasoning capabilities anyway (which is still possible to achieve in a lab of a single company with a little too much H100 power)
YouTube · AI Governance · 2023-06-25T17:1… · ♥ 5
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
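For working with these records programmatically, here is a minimal sketch of how one coded comment could be represented; the class and field names are illustrative assumptions rather than the project's actual schema, and the inline comments show the values from the table above.

from dataclasses import dataclass

@dataclass
class CodedComment:
    comment_id: str      # e.g. "ytc_Ugw_vDk_1yjEcZa0Su14AaABAg"
    responsibility: str  # "company" for the comment above
    reasoning: str       # "consequentialist"
    policy: str          # "regulate"
    emotion: str         # "fear"
    coded_at: str        # ISO timestamp, e.g. "2026-04-27T06:26:44.938723"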
Raw LLM Response
[ {"id":"ytc_UgxfyaQR8T8Phyqr1uR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwHyJCTsfpVlfgY4id4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw_vDk_1yjEcZa0Su14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy5juCCOxV9EAeLoap4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxsrxsaoh_cpFVOt0J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxmH-ZSOO9mANQXhLJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyAj2SIj0EZsIccX4t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzl632hSs5IQj1nmmt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgycOwtcRYYxuCKHms94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwuf3eCnfzt6p4cfql4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"} ]