Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You should know AI has neural networks like humans do, but a few things are missing: one is an inhibitory neuronal network, the other is insufficient sparsity. The latter has been worked on to death via fine-tuning and is pretty effective now (though GPT-4 still isn't that good at it); it just means less training data and fewer one-line connections yield more accuracy. The former has barely been tapped, which basically means the AI cannot stop itself from saying weird things, lying, or producing hallucinations. You can put restrictions on it and teach it not to talk about certain subjects, but it will still be a pain. It also leaves the AI unable to understand negative sentences and words such as "do not", "without", etc. There is also the hallucination problem, which still hasn't been fixed, and we still have no effective way to get rid of it. The only way to reduce it is by increasing the dataset, sparsity, and fine-tuning, so big advanced LLMs will hallucinate less, but they will still hallucinate. One more problem is dates: since most LLMs have only seen text, they have very little orientation in time and place, and GPT-4 was only trained on data up to 2021, which makes this extra confusing for it. This could be solved by adding other data types and putting extra work into time orientation, but no current LLM works on time orientation; they usually work on sparsity, attention, and memory through fine-tuning plus multiple modalities. There is still a lot to do.
youtube AI Responsibility 2023-06-19T07:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugye84O76NMihjxOd3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxwGbd_FPGWJup-EQ54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzwa7kYGqFCWZseJ794AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwe6X7EB2JX9S5iMdd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxwnHz3iJR-rAWPBaN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzAYrpU2yCn7d1MJVN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgziO3SWxcHkwIirQHZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxSMISfW9gzNGFEn4t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwCZ3Uxq2FcxOfEBjp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzvrYfUTU_AlhlJdZJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
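The per-dimension coding result shown above can be recovered from the raw response by parsing it as a JSON array of coding objects and indexing by comment id. A minimal sketch, assuming the raw response always parses cleanly (no markdown fences or extra text around the array) and using two of the objects above as sample data:

```python
import json

# Raw LLM response: a JSON array with one coding object per comment
# (sample trimmed to two entries from the full response above).
raw = '''[
  {"id": "ytc_Ugye84O76NMihjxOd3d4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxwGbd_FPGWJup-EQ54AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Index the codings by comment id for fast lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for the comment displayed on this page and
# print one "dimension: value" line per coded dimension.
coding = codings["ytc_Ugye84O76NMihjxOd3d4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {coding[dim]}")
```

In a real pipeline the lookup step is what pairs each displayed comment with its row in the batch response; if the model wraps the array in prose or code fences, the JSON would need to be extracted before `json.loads`.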