Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@matthew_berman sure here are some remarks: (1) the paper isn’t putting open AI to sleep, it’s calling for a pause on training models bigger/more powerful/smarter than gpt4. (2) The concerns on A.G.I & the lack of safety and alignment research are shared by almost every notable A.I researcher, regardless of economic interest, including myself (Harvard certified Computer Science for A.I) (3) As mentioned in the letter, OpenAi did also address these concerns and that a pause will need to happen to protect humanity. (4) this video & apparently your understanding on the topic too, shows that there is a general lack of awareness by media on what it’s really about and why the call for a pause. (5) the potential, however small one may regard that probability, but it is non trivial: it has the potential of civilisational destruction. (6) the loss of jobs/misinformation are not the main concern nor the most imminent threat. 6 months are a time frame so that media, creators and the general public could become more informed and aware of the real danger of misaligned A.I, how close we are to AGI and that we, including the scientist/devs creating these algorithms, currently have no idea on how/why these systems make a certain prediction. (7) When 22 members of the center of existential risks are calling for a pause it has nothing to do with catching up with the competition, and you are missing the whole point.
youtube AI Governance 2023-04-24T20:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgziWZnk3z9JA4c34Zt4AaABAg.9nsWcOzGs3-9ouCLs42V_i","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyEDA0Qgd2wknJtlMN4AaABAg.9nsUFeTo--P9ntwYYrb4Mh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzcIhg-_RC6r58damR4AaABAg.9nsPyZxjLwq9ntII-i6qgD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugy6sORpZ-8inVK8k1l4AaABAg.9nsDc2VopHJ9ntIY0oWyMy","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgxuMEBh2QIqOGuP6rp4AaABAg.9ns6HzyWFxs9ntIslEf2v8","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxDZVJcQX3o6OpY3eB4AaABAg.9ns4n_EGWW39ntx68MSPJH","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzBim0lzz1951IYIOJ4AaABAg.9nrpFm1Nwsb9nrtmDbxEhw","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwvtUjccGFfPIV6nwZ4AaABAg.9nrnZpaNGkR9nropYpoQoY","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwvtUjccGFfPIV6nwZ4AaABAg.9nrnZpaNGkR9ns6CuUqNvJ","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgwvtUjccGFfPIV6nwZ4AaABAg.9nrnZpaNGkR9nsFhysys7R","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
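The raw LLM response above is a JSON array with one record per comment, keyed by a `ytr_…` comment id and carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and looked up by id; `index_codes` is a hypothetical helper name, and the short id in the example is invented for illustration:

```python
import json

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response (JSON array of per-comment codes)
    and map each comment id to its record of coded dimensions."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Example input in the same shape as the raw response above
# (the id "ytr_example" is a placeholder, not a real comment id).
raw = """[
  {"id": "ytr_example", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]"""

codes = index_codes(raw)
print(codes["ytr_example"]["emotion"])  # prints "indifference"
```

Indexing by id makes it cheap to join the model's codes back to the displayed comment, which is what the inspection view above is doing for a single record.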