Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@osuf3581 So, first of all I want to thank you for the detailed response and also for the lack of hostility in your response, as this is something that seems to be rare on the internet, sadly. In regards to what you have said about Marcus, it is definitely very interesting to hear that he seems to have ulterior motives, and it's also interesting to hear what view the field has on him, from your POV. However, even if it is due to ulterior motives, I have to say I'm glad he served as a kind of button-pusher or critic of OpenAI and LLMs in this meeting, as I have become quite concerned about the emerging technology as well. I have just finished my Bachelor's degree in Psychology and over the past 2 weeks I have been reading papers and watching videos endlessly on the topic of AGI, LLMs and their potential implications for the future, and current AI in general. As I was not familiar with the field (up until 2 weeks ago) I had no idea how these LLMs worked, and I still don't to some degree, even though I am aware that even the engineers themselves see it as a black box to a certain extent. I have mainly been informing myself through people such as Lex Fridman, Geoffrey Hinton, Max Tegmark, John Vervaeke, Marcus and Eliezer Yudkowsky (even though I must admit, he seems to be more pessimistic and hyperbolic than the rest). I was wondering if you had any recommendations of books, videos or just people in general, for me to get a better sense of how these LLMs work and how people would even be able to regulate them. As of now I understand they act like neural networks which have over 100 billion connections and run on 1 megawatt of power, in contrast to the 30 watts used by the human brain. Due to my background in Psychology I can also imagine what is meant by the use of statistical inference in, for example, ChatGPT.
However, I still find it hard to understand how pattern recognition and statistical inference alone can lead to some of the answers which ChatGPT is able to give or come up with. Something that also seems unexplainable to me are the hallucinations produced by ChatGPT. Why would it invent references to papers that don't exist? What is the computation behind that? It would be much appreciated if you could help me out by referring me to some things that might better my understanding!
Source: youtube · Video: AI Governance · Posted: 2023-05-16T23:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgxCjMgyRERZJIkf8xZ4AaABAg.9pnEEvJXGCz9ppLdG82g68","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugz7uwC_XdVYigT_wMN4AaABAg.9pnCNopi2FA9prnBkY3yt6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzHDvZhbtDqDqGE9994AaABAg.9pn8v-yF-1O9pw-tfinM3p","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzHDvZhbtDqDqGE9994AaABAg.9pn8v-yF-1O9pwkb4wncvY","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugw664Rx60xutHn03-t4AaABAg.9pn7EnsV1uU9poyLPNYwab","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugx7ziFqJsBPmaKCYoV4AaABAg.9pn1pBSkPYh9pnIMmjWqLX","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_Ugw1lGgAunYPeXO364Z4AaABAg.9pmxRaW3Tdk9pnHQ8-VsbD","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugw1lGgAunYPeXO364Z4AaABAg.9pmxRaW3Tdk9pnLWRnnPqt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugxz8_9G92PzqsTLWCx4AaABAg.9pmnSIUISGq9pn43JReBAX","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_Ugxz8_9G92PzqsTLWCx4AaABAg.9pmnSIUISGq9pn7aIf3VCD","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
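The raw response is a JSON array with one object per coded comment. A minimal sketch of how such output could be parsed and sanity-checked before loading it into the results table, assuming only the dimension values actually visible in this record (the `SCHEMA` sets and the `parse_raw_response` helper are illustrative, not part of the actual coding pipeline, and the full codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred solely from the codes visible
# in this record; the real codebook may permit additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "industry_self", "regulate"},
    "emotion": {"approval", "fear", "mixed"},
}

def parse_raw_response(raw):
    """Parse a raw LLM response (a JSON array of per-comment codes) and
    keep only entries whose values all fall inside the known schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example using the first entry from the response above.
raw = ('[{"id":"ytr_UgxCjMgyRERZJIkf8xZ4AaABAg.9pnEEvJXGCz9ppLdG82g68",'
       '"responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}]')
for row in parse_raw_response(raw):
    print(row["id"], row["emotion"])
```

An entry with an out-of-schema value (say, a misspelled emotion) is silently dropped here; a production pipeline would more likely log or re-prompt for such rows.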