Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
they do touch upon the subject if you listen to the full hearing. the senate asks "how long until AGI?" - the general response was "we don't know, improvements to AI happen quickly so it's important we get ahead of this." they also talked about the dangers of plugging current AI systems into something like military weapons, letting an AI freely perform tasks on the internet, as well as ensuring that you shouldn't be allowed to release AIs that can self replicate. the general response to all these concerns was that companies creating an AI could potentially require a license if you are serving a user base larger than X number (the example given was 10 million to 100 million users). another topic that was brought up was to create a government/global agency that enforces these laws as well as having a moratorium of scientists that work for the government to ensure agencies are keeping up with the advancements in AI so the correct things are enforced as time goes on
Source: YouTube, "AI Governance", 2023-05-16T22:5… (♥ 1)
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgxCcb86nnkWt4pW-kF4AaABAg.9oY-SbehkMZA5tZrjAjS_h", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugz_4uL68XEflzgsmqp4AaABAg.AOOgVTlWzF_AVT1dcVMYFy", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgzY3g868RP0sUhVc9t4AaABAg.ANXkw-RgebbAVT1HK-WmUU", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_Ugyxlb6OCotXBHPw7rN4AaABAg.ANE2uD72vldAVT0a8H-SxY", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugyxlb6OCotXBHPw7rN4AaABAg.ANE2uD72vldAVT0nYC3Luw", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_Ugz4exGNsbg8Xwuu7hp4AaABAg.AN9GrtnEQIUANG1p_F6Xh1", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgwWkqbHIEzsp5dv6at4AaABAg.9poTbhxQYI_9prl_C5QS55", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyaW4tT9gJkU9kk7Y14AaABAg.9pnH7lNDcvg9pnMCaVvC9H", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pn4wZNU7r_", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pnhwrZkWP3", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
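A raw response like the one above can be checked before use: each record should carry an `id` plus one value per coding dimension. Below is a minimal validation sketch in Python. The allowed value sets are assumed from the labels visible on this page (the actual codebook may include values not shown here), and `parse_codings` is a hypothetical helper name, not part of any tool shown above.

```python
import json

# Allowed values per dimension, inferred from the examples on this
# page; the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response into coding records, dropping malformed entries."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Keep only dicts that have an id and a recognised value
        # for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytr_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_codings(raw)[0]["policy"])  # → regulate
```

Records that fail validation are dropped rather than repaired, so a second pass (or a re-prompt) can handle any comment whose id is missing from the parsed output.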